2026-02-04 01:18:36.060610 | Job console starting
2026-02-04 01:18:36.073438 | Updating git repos
2026-02-04 01:18:36.637562 | Cloning repos into workspace
2026-02-04 01:18:36.852297 | Restoring repo states
2026-02-04 01:18:36.872169 | Merging changes
2026-02-04 01:18:36.872190 | Checking out repos
2026-02-04 01:18:37.128768 | Preparing playbooks
2026-02-04 01:18:37.771498 | Running Ansible setup
2026-02-04 01:18:42.297507 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-04 01:18:43.071037 |
2026-02-04 01:18:43.071198 | PLAY [Base pre]
2026-02-04 01:18:43.087812 |
2026-02-04 01:18:43.087934 | TASK [Setup log path fact]
2026-02-04 01:18:43.117950 | orchestrator | ok
2026-02-04 01:18:43.135270 |
2026-02-04 01:18:43.135404 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-04 01:18:43.177317 | orchestrator | ok
2026-02-04 01:18:43.189971 |
2026-02-04 01:18:43.190169 | TASK [emit-job-header : Print job information]
2026-02-04 01:18:43.234982 | # Job Information
2026-02-04 01:18:43.235232 | Ansible Version: 2.16.14
2026-02-04 01:18:43.235285 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-04 01:18:43.235336 | Pipeline: periodic-midnight
2026-02-04 01:18:43.235371 | Executor: 521e9411259a
2026-02-04 01:18:43.235403 | Triggered by: https://github.com/osism/testbed
2026-02-04 01:18:43.235437 | Event ID: ae64838415194271b89fad81bc239d83
2026-02-04 01:18:43.244305 |
2026-02-04 01:18:43.244428 | LOOP [emit-job-header : Print node information]
2026-02-04 01:18:43.384995 | orchestrator | ok:
2026-02-04 01:18:43.385293 | orchestrator | # Node Information
2026-02-04 01:18:43.385328 | orchestrator | Inventory Hostname: orchestrator
2026-02-04 01:18:43.385352 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-04 01:18:43.385374 | orchestrator | Username: zuul-testbed03
2026-02-04 01:18:43.385395 | orchestrator | Distro: Debian 12.13
2026-02-04 01:18:43.385419 | orchestrator | Provider: static-testbed
2026-02-04 01:18:43.385439 | orchestrator | Region:
2026-02-04 01:18:43.385460 | orchestrator | Label: testbed-orchestrator
2026-02-04 01:18:43.385479 | orchestrator | Product Name: OpenStack Nova
2026-02-04 01:18:43.385499 | orchestrator | Interface IP: 81.163.193.140
2026-02-04 01:18:43.415327 |
2026-02-04 01:18:43.415513 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-04 01:18:43.909731 | orchestrator -> localhost | changed
2026-02-04 01:18:43.929602 |
2026-02-04 01:18:43.929782 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-04 01:18:45.011346 | orchestrator -> localhost | changed
2026-02-04 01:18:45.031722 |
2026-02-04 01:18:45.031844 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-04 01:18:45.339457 | orchestrator -> localhost | ok
2026-02-04 01:18:45.347022 |
2026-02-04 01:18:45.347154 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-04 01:18:45.377574 | orchestrator | ok
2026-02-04 01:18:45.395974 | orchestrator | included: /var/lib/zuul/builds/5d4c0549b7dc4b04b9061401cc85362e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-04 01:18:45.404413 |
2026-02-04 01:18:45.404519 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-04 01:18:46.393272 | orchestrator -> localhost | Generating public/private rsa key pair.
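Editor's note: the "Create Temp SSH key" task whose output follows is, in essence, a single ssh-keygen call. A minimal sketch under stated assumptions — the RSA 3072 key type, the `<build-uuid>_id_rsa` naming, and the `zuul-build-sshkey` comment are taken from the log below; the empty passphrase and exact flags are assumptions, and the real role lives in openinfra-zuul-jobs:

```shell
# Sketch (assumed flags): reproduce the per-build key the role generates.
# The build UUID and workdir here are illustrative stand-ins.
workdir=$(mktemp -d)
build_uuid=5d4c0549b7dc4b04b9061401cc85362e
ssh-keygen -t rsa -b 3072 -N '' -C zuul-build-sshkey \
  -f "${workdir}/${build_uuid}_id_rsa"
ls "${workdir}"
```

The key is later loaded into the executor's agent ("Add back temp key") and installed on every node, so the rest of the job can SSH without the static master key.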
2026-02-04 01:18:46.393839 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/5d4c0549b7dc4b04b9061401cc85362e/work/5d4c0549b7dc4b04b9061401cc85362e_id_rsa
2026-02-04 01:18:46.394058 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/5d4c0549b7dc4b04b9061401cc85362e/work/5d4c0549b7dc4b04b9061401cc85362e_id_rsa.pub
2026-02-04 01:18:46.394157 | orchestrator -> localhost | The key fingerprint is:
2026-02-04 01:18:46.394233 | orchestrator -> localhost | SHA256:cOV3/8IrQrycjwNmSSOPISzswjx02+Q5t6hgZrrLTy8 zuul-build-sshkey
2026-02-04 01:18:46.394304 | orchestrator -> localhost | The key's randomart image is:
2026-02-04 01:18:46.394396 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-04 01:18:46.394463 | orchestrator -> localhost | | . |
2026-02-04 01:18:46.394527 | orchestrator -> localhost | | o |
2026-02-04 01:18:46.394585 | orchestrator -> localhost | | . . . . . . . |
2026-02-04 01:18:46.394642 | orchestrator -> localhost | | .o.o.ooo . . . |
2026-02-04 01:18:46.394700 | orchestrator -> localhost | |+...=..*S+ .|
2026-02-04 01:18:46.394769 | orchestrator -> localhost | |.+.. =..* o . .|
2026-02-04 01:18:46.394829 | orchestrator -> localhost | | *.. +o.+ o o .|
2026-02-04 01:18:46.394981 | orchestrator -> localhost | |* oE.. . *.. o |
2026-02-04 01:18:46.395075 | orchestrator -> localhost | |++.oo. .+... |
2026-02-04 01:18:46.395139 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-04 01:18:46.395279 | orchestrator -> localhost | ok: Runtime: 0:00:00.449577
2026-02-04 01:18:46.410325 |
2026-02-04 01:18:46.410475 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-04 01:18:46.451412 | orchestrator | ok
2026-02-04 01:18:46.465414 | orchestrator | included: /var/lib/zuul/builds/5d4c0549b7dc4b04b9061401cc85362e/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-04 01:18:46.474819 |
2026-02-04 01:18:46.474953 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-04 01:18:46.499045 | orchestrator | skipping: Conditional result was False
2026-02-04 01:18:46.508341 |
2026-02-04 01:18:46.508443 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-04 01:18:47.152088 | orchestrator | changed
2026-02-04 01:18:47.161958 |
2026-02-04 01:18:47.162125 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-04 01:18:47.494886 | orchestrator | ok
2026-02-04 01:18:47.506045 |
2026-02-04 01:18:47.506183 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-04 01:18:48.147652 | orchestrator | ok
2026-02-04 01:18:48.156842 |
2026-02-04 01:18:48.156969 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-04 01:18:48.612347 | orchestrator | ok
2026-02-04 01:18:48.622025 |
2026-02-04 01:18:48.622154 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-04 01:18:48.647736 | orchestrator | skipping: Conditional result was False
2026-02-04 01:18:48.659337 |
2026-02-04 01:18:48.659478 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-04 01:18:49.101100 | orchestrator -> localhost | changed
2026-02-04 01:18:49.125825 |
2026-02-04 01:18:49.125972 | TASK [add-build-sshkey : Add back temp key]
2026-02-04 01:18:49.475047 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/5d4c0549b7dc4b04b9061401cc85362e/work/5d4c0549b7dc4b04b9061401cc85362e_id_rsa (zuul-build-sshkey)
2026-02-04 01:18:49.475739 | orchestrator -> localhost | ok: Runtime: 0:00:00.019259
2026-02-04 01:18:49.491171 |
2026-02-04 01:18:49.491321 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-04 01:18:49.934506 | orchestrator | ok
2026-02-04 01:18:49.942927 |
2026-02-04 01:18:49.943070 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-04 01:18:49.977699 | orchestrator | skipping: Conditional result was False
2026-02-04 01:18:50.039654 |
2026-02-04 01:18:50.039806 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-04 01:18:50.491370 | orchestrator | ok
2026-02-04 01:18:50.505093 |
2026-02-04 01:18:50.505209 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-04 01:18:50.553239 | orchestrator | ok
2026-02-04 01:18:50.564894 |
2026-02-04 01:18:50.565057 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-04 01:18:50.883416 | orchestrator -> localhost | ok
2026-02-04 01:18:50.891792 |
2026-02-04 01:18:50.891902 | TASK [validate-host : Collect information about the host]
2026-02-04 01:18:52.146158 | orchestrator | ok
2026-02-04 01:18:52.159861 |
2026-02-04 01:18:52.159974 | TASK [validate-host : Sanitize hostname]
2026-02-04 01:18:52.224605 | orchestrator | ok
2026-02-04 01:18:52.233661 |
2026-02-04 01:18:52.233790 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-04 01:18:52.824056 | orchestrator -> localhost | changed
2026-02-04 01:18:52.836951 |
2026-02-04 01:18:52.837168 | TASK [validate-host : Collect information about zuul worker]
2026-02-04 01:18:53.321732 | orchestrator | ok
2026-02-04 01:18:53.330343 |
2026-02-04 01:18:53.330483 | TASK [validate-host : Write out all zuul information for each host]
2026-02-04 01:18:53.874070 | orchestrator -> localhost | changed
2026-02-04 01:18:53.885133 |
2026-02-04 01:18:53.885245 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-04 01:18:54.235446 | orchestrator | ok
2026-02-04 01:18:54.245781 |
2026-02-04 01:18:54.245919 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-04 01:19:17.345799 | orchestrator | changed:
2026-02-04 01:19:17.346094 | orchestrator | .d..t...... src/
2026-02-04 01:19:17.346786 | orchestrator | .d..t...... src/github.com/
2026-02-04 01:19:17.346830 | orchestrator | .d..t...... src/github.com/osism/
2026-02-04 01:19:17.346885 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-04 01:19:17.346912 | orchestrator | RedHat.yml
2026-02-04 01:19:17.362871 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-04 01:19:17.362889 | orchestrator | RedHat.yml
2026-02-04 01:19:17.362941 | orchestrator | = 2.2.0"...
2026-02-04 01:19:28.679235 | orchestrator | - Finding latest version of hashicorp/null...
2026-02-04 01:19:28.697190 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-02-04 01:19:28.852300 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-04 01:19:29.290505 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-04 01:19:29.756018 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-04 01:19:30.415113 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-04 01:19:30.875772 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-04 01:19:31.734556 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-04 01:19:31.734604 | orchestrator |
2026-02-04 01:19:31.734610 | orchestrator | Providers are signed by their developers.
2026-02-04 01:19:31.734615 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-04 01:19:31.734621 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-04 01:19:31.734633 | orchestrator |
2026-02-04 01:19:31.734637 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-04 01:19:31.734647 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-04 01:19:31.734651 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-04 01:19:31.734655 | orchestrator | you run "tofu init" in the future.
2026-02-04 01:19:31.734901 | orchestrator |
2026-02-04 01:19:31.734914 | orchestrator | OpenTofu has been successfully initialized!
2026-02-04 01:19:31.734919 | orchestrator |
2026-02-04 01:19:31.734923 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-04 01:19:31.734927 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-04 01:19:31.734931 | orchestrator | should now work.
2026-02-04 01:19:31.734935 | orchestrator |
2026-02-04 01:19:31.734942 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-04 01:19:31.734949 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-04 01:19:31.734953 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-04 01:19:31.916332 | orchestrator | Created and switched to workspace "ci"!
2026-02-04 01:19:31.916369 | orchestrator |
2026-02-04 01:19:31.916378 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-04 01:19:31.916386 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-04 01:19:31.916411 | orchestrator | for this configuration.
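Editor's note: the init, workspace, and plan output around this point corresponds to a short OpenTofu command sequence. A minimal sketch under stated assumptions — the workspace name `ci` and the provider set come from the log; the exact wrapper script the testbed job uses is not visible in this excerpt, so the commands below are illustrative, not the job's literal invocation:

```shell
# Sketch (assumed): the command sequence that would produce the output above.
tofu init              # installs hashicorp/local, hashicorp/null and
                       # terraform-provider-openstack; writes .terraform.lock.hcl
tofu workspace new ci  # prints: Created and switched to workspace "ci"!
tofu plan              # emits the execution plan that follows in the log
```

Committing the generated .terraform.lock.hcl, as the init message suggests, pins provider versions so later runs of "tofu init" make the same selections.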
2026-02-04 01:19:32.038540 | orchestrator | ci.auto.tfvars
2026-02-04 01:19:32.041314 | orchestrator | default_custom.tf
2026-02-04 01:19:32.920034 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-04 01:19:33.475993 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-04 01:19:33.709403 | orchestrator |
2026-02-04 01:19:33.709457 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-04 01:19:33.709463 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-04 01:19:33.709484 | orchestrator |   + create
2026-02-04 01:19:33.709498 | orchestrator |  <= read (data resources)
2026-02-04 01:19:33.709511 | orchestrator |
2026-02-04 01:19:33.709515 | orchestrator | OpenTofu will perform the following actions:
2026-02-04 01:19:33.709604 | orchestrator |
2026-02-04 01:19:33.709616 | orchestrator |   # data.openstack_images_image_v2.image will be read during apply
2026-02-04 01:19:33.709621 | orchestrator |   # (config refers to values not yet known)
2026-02-04 01:19:33.709626 | orchestrator |  <= data "openstack_images_image_v2" "image" {
2026-02-04 01:19:33.709629 | orchestrator |       + checksum = (known after apply)
2026-02-04 01:19:33.709633 | orchestrator |       + created_at = (known after apply)
2026-02-04 01:19:33.709637 | orchestrator |       + file = (known after apply)
2026-02-04 01:19:33.709641 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.709658 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.709662 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-04 01:19:33.709666 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-04 01:19:33.709670 | orchestrator |       + most_recent = true
2026-02-04 01:19:33.709674 | orchestrator |       + name = (known after apply)
2026-02-04 01:19:33.709678 | orchestrator |       + protected = (known after apply)
2026-02-04 01:19:33.709682 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.709687 | orchestrator |       + schema = (known after apply)
2026-02-04 01:19:33.709691 | orchestrator |       + size_bytes = (known after apply)
2026-02-04 01:19:33.709695 | orchestrator |       + tags = (known after apply)
2026-02-04 01:19:33.709699 | orchestrator |       + updated_at = (known after apply)
2026-02-04 01:19:33.709703 | orchestrator |     }
2026-02-04 01:19:33.709778 | orchestrator |
2026-02-04 01:19:33.709790 | orchestrator |   # data.openstack_images_image_v2.image_node will be read during apply
2026-02-04 01:19:33.709811 | orchestrator |   # (config refers to values not yet known)
2026-02-04 01:19:33.709816 | orchestrator |  <= data "openstack_images_image_v2" "image_node" {
2026-02-04 01:19:33.709820 | orchestrator |       + checksum = (known after apply)
2026-02-04 01:19:33.709824 | orchestrator |       + created_at = (known after apply)
2026-02-04 01:19:33.709828 | orchestrator |       + file = (known after apply)
2026-02-04 01:19:33.709831 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.709835 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.709839 | orchestrator |       + min_disk_gb = (known after apply)
2026-02-04 01:19:33.709843 | orchestrator |       + min_ram_mb = (known after apply)
2026-02-04 01:19:33.709847 | orchestrator |       + most_recent = true
2026-02-04 01:19:33.709850 | orchestrator |       + name = (known after apply)
2026-02-04 01:19:33.709854 | orchestrator |       + protected = (known after apply)
2026-02-04 01:19:33.709858 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.709862 | orchestrator |       + schema = (known after apply)
2026-02-04 01:19:33.709865 | orchestrator |       + size_bytes = (known after apply)
2026-02-04 01:19:33.709869 | orchestrator |       + tags = (known after apply)
2026-02-04 01:19:33.709873 | orchestrator |       + updated_at = (known after apply)
2026-02-04 01:19:33.709877 | orchestrator |     }
2026-02-04 01:19:33.709949 | orchestrator |
2026-02-04 01:19:33.709961 | orchestrator |   # local_file.MANAGER_ADDRESS will be created
2026-02-04 01:19:33.709966 | orchestrator |   + resource "local_file" "MANAGER_ADDRESS" {
2026-02-04 01:19:33.709970 | orchestrator |       + content = (known after apply)
2026-02-04 01:19:33.709974 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-04 01:19:33.709977 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-04 01:19:33.709981 | orchestrator |       + content_md5 = (known after apply)
2026-02-04 01:19:33.709985 | orchestrator |       + content_sha1 = (known after apply)
2026-02-04 01:19:33.709989 | orchestrator |       + content_sha256 = (known after apply)
2026-02-04 01:19:33.709992 | orchestrator |       + content_sha512 = (known after apply)
2026-02-04 01:19:33.709996 | orchestrator |       + directory_permission = "0777"
2026-02-04 01:19:33.710000 | orchestrator |       + file_permission = "0644"
2026-02-04 01:19:33.710004 | orchestrator |       + filename = ".MANAGER_ADDRESS.ci"
2026-02-04 01:19:33.710008 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.710011 | orchestrator |     }
2026-02-04 01:19:33.710096 | orchestrator |
2026-02-04 01:19:33.710108 | orchestrator |   # local_file.id_rsa_pub will be created
2026-02-04 01:19:33.710112 | orchestrator |   + resource "local_file" "id_rsa_pub" {
2026-02-04 01:19:33.710116 | orchestrator |       + content = (known after apply)
2026-02-04 01:19:33.710120 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-04 01:19:33.710124 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-04 01:19:33.710127 | orchestrator |       + content_md5 = (known after apply)
2026-02-04 01:19:33.710131 | orchestrator |       + content_sha1 = (known after apply)
2026-02-04 01:19:33.710135 | orchestrator |       + content_sha256 = (known after apply)
2026-02-04 01:19:33.710145 | orchestrator |       + content_sha512 = (known after apply)
2026-02-04 01:19:33.710149 | orchestrator |       + directory_permission = "0777"
2026-02-04 01:19:33.710152 | orchestrator |       + file_permission = "0644"
2026-02-04 01:19:33.710160 | orchestrator |       + filename = ".id_rsa.ci.pub"
2026-02-04 01:19:33.710164 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.710168 | orchestrator |     }
2026-02-04 01:19:33.710231 | orchestrator |
2026-02-04 01:19:33.710242 | orchestrator |   # local_file.inventory will be created
2026-02-04 01:19:33.710246 | orchestrator |   + resource "local_file" "inventory" {
2026-02-04 01:19:33.710250 | orchestrator |       + content = (known after apply)
2026-02-04 01:19:33.710254 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-04 01:19:33.710258 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-04 01:19:33.710261 | orchestrator |       + content_md5 = (known after apply)
2026-02-04 01:19:33.710265 | orchestrator |       + content_sha1 = (known after apply)
2026-02-04 01:19:33.710269 | orchestrator |       + content_sha256 = (known after apply)
2026-02-04 01:19:33.710273 | orchestrator |       + content_sha512 = (known after apply)
2026-02-04 01:19:33.710277 | orchestrator |       + directory_permission = "0777"
2026-02-04 01:19:33.710281 | orchestrator |       + file_permission = "0644"
2026-02-04 01:19:33.710284 | orchestrator |       + filename = "inventory.ci"
2026-02-04 01:19:33.710288 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.710292 | orchestrator |     }
2026-02-04 01:19:33.710353 | orchestrator |
2026-02-04 01:19:33.710364 | orchestrator |   # local_sensitive_file.id_rsa will be created
2026-02-04 01:19:33.710369 | orchestrator |   + resource "local_sensitive_file" "id_rsa" {
2026-02-04 01:19:33.710372 | orchestrator |       + content = (sensitive value)
2026-02-04 01:19:33.710376 | orchestrator |       + content_base64sha256 = (known after apply)
2026-02-04 01:19:33.710380 | orchestrator |       + content_base64sha512 = (known after apply)
2026-02-04 01:19:33.710384 | orchestrator |       + content_md5 = (known after apply)
2026-02-04 01:19:33.710387 | orchestrator |       + content_sha1 = (known after apply)
2026-02-04 01:19:33.710391 | orchestrator |       + content_sha256 = (known after apply)
2026-02-04 01:19:33.710395 | orchestrator |       + content_sha512 = (known after apply)
2026-02-04 01:19:33.710398 | orchestrator |       + directory_permission = "0700"
2026-02-04 01:19:33.710402 | orchestrator |       + file_permission = "0600"
2026-02-04 01:19:33.710406 | orchestrator |       + filename = ".id_rsa.ci"
2026-02-04 01:19:33.710410 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.710414 | orchestrator |     }
2026-02-04 01:19:33.710432 | orchestrator |
2026-02-04 01:19:33.710443 | orchestrator |   # null_resource.node_semaphore will be created
2026-02-04 01:19:33.710447 | orchestrator |   + resource "null_resource" "node_semaphore" {
2026-02-04 01:19:33.710451 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.710455 | orchestrator |     }
2026-02-04 01:19:33.710515 | orchestrator |
2026-02-04 01:19:33.710526 | orchestrator |   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-04 01:19:33.710530 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-04 01:19:33.710534 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.710538 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.710542 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.710545 | orchestrator |       + image_id = (known after apply)
2026-02-04 01:19:33.710549 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.710553 | orchestrator |       + name = "testbed-volume-manager-base"
2026-02-04 01:19:33.710557 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.710561 | orchestrator |       + size = 80
2026-02-04 01:19:33.710564 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.710568 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.710572 | orchestrator |     }
2026-02-04 01:19:33.710630 | orchestrator |
2026-02-04 01:19:33.710641 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-04 01:19:33.710646 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 01:19:33.710649 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.710653 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.710657 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.710663 | orchestrator |       + image_id = (known after apply)
2026-02-04 01:19:33.710667 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.710671 | orchestrator |       + name = "testbed-volume-0-node-base"
2026-02-04 01:19:33.710675 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.710678 | orchestrator |       + size = 80
2026-02-04 01:19:33.710682 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.710686 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.710689 | orchestrator |     }
2026-02-04 01:19:33.710746 | orchestrator |
2026-02-04 01:19:33.710756 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-04 01:19:33.710761 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 01:19:33.710765 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.710769 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.710772 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.710776 | orchestrator |       + image_id = (known after apply)
2026-02-04 01:19:33.710780 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.710783 | orchestrator |       + name = "testbed-volume-1-node-base"
2026-02-04 01:19:33.710787 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.710791 | orchestrator |       + size = 80
2026-02-04 01:19:33.710807 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.710811 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.710815 | orchestrator |     }
2026-02-04 01:19:33.710869 | orchestrator |
2026-02-04 01:19:33.710880 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-04 01:19:33.710885 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 01:19:33.710889 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.710892 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.710896 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.710900 | orchestrator |       + image_id = (known after apply)
2026-02-04 01:19:33.710903 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.710907 | orchestrator |       + name = "testbed-volume-2-node-base"
2026-02-04 01:19:33.710911 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.710915 | orchestrator |       + size = 80
2026-02-04 01:19:33.710921 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.710925 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.710929 | orchestrator |     }
2026-02-04 01:19:33.710986 | orchestrator |
2026-02-04 01:19:33.710998 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-04 01:19:33.711002 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 01:19:33.711006 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.711010 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.711013 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.711017 | orchestrator |       + image_id = (known after apply)
2026-02-04 01:19:33.711021 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.711024 | orchestrator |       + name = "testbed-volume-3-node-base"
2026-02-04 01:19:33.711028 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.711032 | orchestrator |       + size = 80
2026-02-04 01:19:33.711035 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.711039 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.711043 | orchestrator |     }
2026-02-04 01:19:33.711098 | orchestrator |
2026-02-04 01:19:33.711109 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-04 01:19:33.711114 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 01:19:33.711117 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.711121 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.711125 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.711132 | orchestrator |       + image_id = (known after apply)
2026-02-04 01:19:33.711136 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.711140 | orchestrator |       + name = "testbed-volume-4-node-base"
2026-02-04 01:19:33.711144 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.711148 | orchestrator |       + size = 80
2026-02-04 01:19:33.711151 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.711155 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.711159 | orchestrator |     }
2026-02-04 01:19:33.711236 | orchestrator |
2026-02-04 01:19:33.711256 | orchestrator |   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-04 01:19:33.711263 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-04 01:19:33.711270 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.711277 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.711282 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.711289 | orchestrator |       + image_id = (known after apply)
2026-02-04 01:19:33.711295 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.711302 | orchestrator |       + name = "testbed-volume-5-node-base"
2026-02-04 01:19:33.711308 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.711314 | orchestrator |       + size = 80
2026-02-04 01:19:33.711321 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.711325 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.711328 | orchestrator |     }
2026-02-04 01:19:33.711391 | orchestrator |
2026-02-04 01:19:33.711402 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-04 01:19:33.711407 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 01:19:33.711411 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.711415 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.711419 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.711423 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.711427 | orchestrator |       + name = "testbed-volume-0-node-3"
2026-02-04 01:19:33.711430 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.711434 | orchestrator |       + size = 20
2026-02-04 01:19:33.711438 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.711442 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.711446 | orchestrator |     }
2026-02-04 01:19:33.711500 | orchestrator |
2026-02-04 01:19:33.711511 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-04 01:19:33.711516 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 01:19:33.711520 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.711523 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.711527 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.711531 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.711535 | orchestrator |       + name = "testbed-volume-1-node-4"
2026-02-04 01:19:33.711539 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.711542 | orchestrator |       + size = 20
2026-02-04 01:19:33.711546 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.711550 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.711554 | orchestrator |     }
2026-02-04 01:19:33.711608 | orchestrator |
2026-02-04 01:19:33.711619 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-04 01:19:33.711624 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 01:19:33.711627 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.711631 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.711635 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.711639 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.711643 | orchestrator |       + name = "testbed-volume-2-node-5"
2026-02-04 01:19:33.711646 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.711655 | orchestrator |       + size = 20
2026-02-04 01:19:33.711659 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.711663 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.711667 | orchestrator |     }
2026-02-04 01:19:33.711720 | orchestrator |
2026-02-04 01:19:33.711731 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-04 01:19:33.711735 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 01:19:33.711739 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.711743 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.711747 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.711754 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.711758 | orchestrator |       + name = "testbed-volume-3-node-3"
2026-02-04 01:19:33.711762 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.711765 | orchestrator |       + size = 20
2026-02-04 01:19:33.711769 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.711773 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.711777 | orchestrator |     }
2026-02-04 01:19:33.711880 | orchestrator |
2026-02-04 01:19:33.711893 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-04 01:19:33.711898 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 01:19:33.711901 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.711905 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.711909 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.711913 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.711916 | orchestrator |       + name = "testbed-volume-4-node-4"
2026-02-04 01:19:33.711920 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.711924 | orchestrator |       + size = 20
2026-02-04 01:19:33.711928 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.711932 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.711935 | orchestrator |     }
2026-02-04 01:19:33.711991 | orchestrator |
2026-02-04 01:19:33.712002 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-04 01:19:33.712007 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 01:19:33.712011 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.712014 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.712018 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.712022 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.712025 | orchestrator |       + name = "testbed-volume-5-node-5"
2026-02-04 01:19:33.712029 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.712033 | orchestrator |       + size = 20
2026-02-04 01:19:33.712036 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.712040 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.712044 | orchestrator |     }
2026-02-04 01:19:33.712136 | orchestrator |
2026-02-04 01:19:33.712155 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-04 01:19:33.712160 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 01:19:33.712164 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.712167 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.712172 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.712175 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.712179 | orchestrator |       + name = "testbed-volume-6-node-3"
2026-02-04 01:19:33.712183 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.712187 | orchestrator |       + size = 20
2026-02-04 01:19:33.712190 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.712194 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.712198 | orchestrator |     }
2026-02-04 01:19:33.712256 | orchestrator |
2026-02-04 01:19:33.712267 | orchestrator |   # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-04 01:19:33.712272 | orchestrator |   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-04 01:19:33.712280 | orchestrator |       + attachment = (known after apply)
2026-02-04 01:19:33.712284 | orchestrator |       + availability_zone = "nova"
2026-02-04 01:19:33.712287 | orchestrator |       + id = (known after apply)
2026-02-04 01:19:33.712291 | orchestrator |       + metadata = (known after apply)
2026-02-04 01:19:33.712295 | orchestrator |       + name = "testbed-volume-7-node-4"
2026-02-04 01:19:33.712299 | orchestrator |       + region = (known after apply)
2026-02-04 01:19:33.712302 | orchestrator |       + size = 20
2026-02-04 01:19:33.712306 | orchestrator |       + volume_retype_policy = "never"
2026-02-04 01:19:33.712310 | orchestrator |       + volume_type = "ssd"
2026-02-04 01:19:33.712313 | orchestrator |     }
2026-02-04 01:19:33.712368 | orchestrator |
2026-02-04 01:19:33.712380 | orchestrator |   #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-04 01:19:33.712384 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-04 01:19:33.712388 | orchestrator | + attachment = (known after apply) 2026-02-04 01:19:33.712391 | orchestrator | + availability_zone = "nova" 2026-02-04 01:19:33.712395 | orchestrator | + id = (known after apply) 2026-02-04 01:19:33.712399 | orchestrator | + metadata = (known after apply) 2026-02-04 01:19:33.712403 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-04 01:19:33.712410 | orchestrator | + region = (known after apply) 2026-02-04 01:19:33.712416 | orchestrator | + size = 20 2026-02-04 01:19:33.712423 | orchestrator | + volume_retype_policy = "never" 2026-02-04 01:19:33.712429 | orchestrator | + volume_type = "ssd" 2026-02-04 01:19:33.712436 | orchestrator | } 2026-02-04 01:19:33.712648 | orchestrator | 2026-02-04 01:19:33.712665 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-04 01:19:33.712669 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-04 01:19:33.712673 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 01:19:33.712677 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 01:19:33.712681 | orchestrator | + all_metadata = (known after apply) 2026-02-04 01:19:33.712684 | orchestrator | + all_tags = (known after apply) 2026-02-04 01:19:33.712688 | orchestrator | + availability_zone = "nova" 2026-02-04 01:19:33.712692 | orchestrator | + config_drive = true 2026-02-04 01:19:33.712699 | orchestrator | + created = (known after apply) 2026-02-04 01:19:33.712703 | orchestrator | + flavor_id = (known after apply) 2026-02-04 01:19:33.712707 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-04 01:19:33.712711 | orchestrator | + force_delete = false 2026-02-04 01:19:33.712714 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 01:19:33.712718 | 
orchestrator | + id = (known after apply) 2026-02-04 01:19:33.712722 | orchestrator | + image_id = (known after apply) 2026-02-04 01:19:33.712725 | orchestrator | + image_name = (known after apply) 2026-02-04 01:19:33.712729 | orchestrator | + key_pair = "testbed" 2026-02-04 01:19:33.712733 | orchestrator | + name = "testbed-manager" 2026-02-04 01:19:33.712736 | orchestrator | + power_state = "active" 2026-02-04 01:19:33.712740 | orchestrator | + region = (known after apply) 2026-02-04 01:19:33.712744 | orchestrator | + security_groups = (known after apply) 2026-02-04 01:19:33.712748 | orchestrator | + stop_before_destroy = false 2026-02-04 01:19:33.712751 | orchestrator | + updated = (known after apply) 2026-02-04 01:19:33.712755 | orchestrator | + user_data = (sensitive value) 2026-02-04 01:19:33.712759 | orchestrator | 2026-02-04 01:19:33.712765 | orchestrator | + block_device { 2026-02-04 01:19:33.712772 | orchestrator | + boot_index = 0 2026-02-04 01:19:33.712778 | orchestrator | + delete_on_termination = false 2026-02-04 01:19:33.712784 | orchestrator | + destination_type = "volume" 2026-02-04 01:19:33.712790 | orchestrator | + multiattach = false 2026-02-04 01:19:33.712825 | orchestrator | + source_type = "volume" 2026-02-04 01:19:33.712833 | orchestrator | + uuid = (known after apply) 2026-02-04 01:19:33.712842 | orchestrator | } 2026-02-04 01:19:33.712846 | orchestrator | 2026-02-04 01:19:33.712850 | orchestrator | + network { 2026-02-04 01:19:33.712853 | orchestrator | + access_network = false 2026-02-04 01:19:33.712857 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 01:19:33.712861 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 01:19:33.712865 | orchestrator | + mac = (known after apply) 2026-02-04 01:19:33.712868 | orchestrator | + name = (known after apply) 2026-02-04 01:19:33.712872 | orchestrator | + port = (known after apply) 2026-02-04 01:19:33.712876 | orchestrator | + uuid = (known after apply) 2026-02-04 
01:19:33.712880 | orchestrator | } 2026-02-04 01:19:33.712883 | orchestrator | } 2026-02-04 01:19:33.713073 | orchestrator | 2026-02-04 01:19:33.713085 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-04 01:19:33.713089 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 01:19:33.713093 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 01:19:33.713097 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 01:19:33.713103 | orchestrator | + all_metadata = (known after apply) 2026-02-04 01:19:33.713109 | orchestrator | + all_tags = (known after apply) 2026-02-04 01:19:33.713115 | orchestrator | + availability_zone = "nova" 2026-02-04 01:19:33.713121 | orchestrator | + config_drive = true 2026-02-04 01:19:33.713128 | orchestrator | + created = (known after apply) 2026-02-04 01:19:33.713134 | orchestrator | + flavor_id = (known after apply) 2026-02-04 01:19:33.713140 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 01:19:33.713147 | orchestrator | + force_delete = false 2026-02-04 01:19:33.713153 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 01:19:33.713160 | orchestrator | + id = (known after apply) 2026-02-04 01:19:33.713165 | orchestrator | + image_id = (known after apply) 2026-02-04 01:19:33.713169 | orchestrator | + image_name = (known after apply) 2026-02-04 01:19:33.713173 | orchestrator | + key_pair = "testbed" 2026-02-04 01:19:33.713177 | orchestrator | + name = "testbed-node-0" 2026-02-04 01:19:33.713180 | orchestrator | + power_state = "active" 2026-02-04 01:19:33.713185 | orchestrator | + region = (known after apply) 2026-02-04 01:19:33.713188 | orchestrator | + security_groups = (known after apply) 2026-02-04 01:19:33.713192 | orchestrator | + stop_before_destroy = false 2026-02-04 01:19:33.713196 | orchestrator | + updated = (known after apply) 2026-02-04 01:19:33.713200 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 01:19:33.713204 | orchestrator | 2026-02-04 01:19:33.713207 | orchestrator | + block_device { 2026-02-04 01:19:33.713211 | orchestrator | + boot_index = 0 2026-02-04 01:19:33.713215 | orchestrator | + delete_on_termination = false 2026-02-04 01:19:33.713219 | orchestrator | + destination_type = "volume" 2026-02-04 01:19:33.713222 | orchestrator | + multiattach = false 2026-02-04 01:19:33.713226 | orchestrator | + source_type = "volume" 2026-02-04 01:19:33.713230 | orchestrator | + uuid = (known after apply) 2026-02-04 01:19:33.713234 | orchestrator | } 2026-02-04 01:19:33.713237 | orchestrator | 2026-02-04 01:19:33.713241 | orchestrator | + network { 2026-02-04 01:19:33.713245 | orchestrator | + access_network = false 2026-02-04 01:19:33.713249 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 01:19:33.713252 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 01:19:33.713256 | orchestrator | + mac = (known after apply) 2026-02-04 01:19:33.713260 | orchestrator | + name = (known after apply) 2026-02-04 01:19:33.713264 | orchestrator | + port = (known after apply) 2026-02-04 01:19:33.713267 | orchestrator | + uuid = (known after apply) 2026-02-04 01:19:33.713271 | orchestrator | } 2026-02-04 01:19:33.713275 | orchestrator | } 2026-02-04 01:19:33.713464 | orchestrator | 2026-02-04 01:19:33.713483 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-04 01:19:33.713491 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 01:19:33.713495 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 01:19:33.713503 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 01:19:33.713507 | orchestrator | + all_metadata = (known after apply) 2026-02-04 01:19:33.713511 | orchestrator | + all_tags = (known after apply) 2026-02-04 01:19:33.713515 | orchestrator | + availability_zone = "nova" 2026-02-04 01:19:33.713519 
| orchestrator | + config_drive = true 2026-02-04 01:19:33.713522 | orchestrator | + created = (known after apply) 2026-02-04 01:19:33.713526 | orchestrator | + flavor_id = (known after apply) 2026-02-04 01:19:33.713530 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 01:19:33.713534 | orchestrator | + force_delete = false 2026-02-04 01:19:33.713538 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 01:19:33.713541 | orchestrator | + id = (known after apply) 2026-02-04 01:19:33.713545 | orchestrator | + image_id = (known after apply) 2026-02-04 01:19:33.713549 | orchestrator | + image_name = (known after apply) 2026-02-04 01:19:33.713553 | orchestrator | + key_pair = "testbed" 2026-02-04 01:19:33.713556 | orchestrator | + name = "testbed-node-1" 2026-02-04 01:19:33.713560 | orchestrator | + power_state = "active" 2026-02-04 01:19:33.713564 | orchestrator | + region = (known after apply) 2026-02-04 01:19:33.713568 | orchestrator | + security_groups = (known after apply) 2026-02-04 01:19:33.713572 | orchestrator | + stop_before_destroy = false 2026-02-04 01:19:33.713576 | orchestrator | + updated = (known after apply) 2026-02-04 01:19:33.713583 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 01:19:33.713586 | orchestrator | 2026-02-04 01:19:33.713590 | orchestrator | + block_device { 2026-02-04 01:19:33.713594 | orchestrator | + boot_index = 0 2026-02-04 01:19:33.713598 | orchestrator | + delete_on_termination = false 2026-02-04 01:19:33.713602 | orchestrator | + destination_type = "volume" 2026-02-04 01:19:33.713605 | orchestrator | + multiattach = false 2026-02-04 01:19:33.713609 | orchestrator | + source_type = "volume" 2026-02-04 01:19:33.713613 | orchestrator | + uuid = (known after apply) 2026-02-04 01:19:33.713617 | orchestrator | } 2026-02-04 01:19:33.713621 | orchestrator | 2026-02-04 01:19:33.713624 | orchestrator | + network { 2026-02-04 01:19:33.713628 | orchestrator | + access_network = 
false 2026-02-04 01:19:33.713632 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 01:19:33.713636 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 01:19:33.713640 | orchestrator | + mac = (known after apply) 2026-02-04 01:19:33.713643 | orchestrator | + name = (known after apply) 2026-02-04 01:19:33.713647 | orchestrator | + port = (known after apply) 2026-02-04 01:19:33.713651 | orchestrator | + uuid = (known after apply) 2026-02-04 01:19:33.713655 | orchestrator | } 2026-02-04 01:19:33.713659 | orchestrator | } 2026-02-04 01:19:33.713883 | orchestrator | 2026-02-04 01:19:33.713898 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-04 01:19:33.713903 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 01:19:33.713906 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 01:19:33.713910 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 01:19:33.713915 | orchestrator | + all_metadata = (known after apply) 2026-02-04 01:19:33.713919 | orchestrator | + all_tags = (known after apply) 2026-02-04 01:19:33.713923 | orchestrator | + availability_zone = "nova" 2026-02-04 01:19:33.713927 | orchestrator | + config_drive = true 2026-02-04 01:19:33.713930 | orchestrator | + created = (known after apply) 2026-02-04 01:19:33.713934 | orchestrator | + flavor_id = (known after apply) 2026-02-04 01:19:33.713938 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 01:19:33.713942 | orchestrator | + force_delete = false 2026-02-04 01:19:33.713945 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 01:19:33.713949 | orchestrator | + id = (known after apply) 2026-02-04 01:19:33.713953 | orchestrator | + image_id = (known after apply) 2026-02-04 01:19:33.713960 | orchestrator | + image_name = (known after apply) 2026-02-04 01:19:33.713964 | orchestrator | + key_pair = "testbed" 2026-02-04 01:19:33.713968 | orchestrator | + name = 
"testbed-node-2" 2026-02-04 01:19:33.713972 | orchestrator | + power_state = "active" 2026-02-04 01:19:33.713975 | orchestrator | + region = (known after apply) 2026-02-04 01:19:33.713979 | orchestrator | + security_groups = (known after apply) 2026-02-04 01:19:33.713983 | orchestrator | + stop_before_destroy = false 2026-02-04 01:19:33.713986 | orchestrator | + updated = (known after apply) 2026-02-04 01:19:33.713990 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 01:19:33.713994 | orchestrator | 2026-02-04 01:19:33.713998 | orchestrator | + block_device { 2026-02-04 01:19:33.714001 | orchestrator | + boot_index = 0 2026-02-04 01:19:33.714005 | orchestrator | + delete_on_termination = false 2026-02-04 01:19:33.714009 | orchestrator | + destination_type = "volume" 2026-02-04 01:19:33.714025 | orchestrator | + multiattach = false 2026-02-04 01:19:33.714030 | orchestrator | + source_type = "volume" 2026-02-04 01:19:33.714034 | orchestrator | + uuid = (known after apply) 2026-02-04 01:19:33.714038 | orchestrator | } 2026-02-04 01:19:33.714042 | orchestrator | 2026-02-04 01:19:33.714045 | orchestrator | + network { 2026-02-04 01:19:33.714049 | orchestrator | + access_network = false 2026-02-04 01:19:33.714053 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 01:19:33.714057 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 01:19:33.714060 | orchestrator | + mac = (known after apply) 2026-02-04 01:19:33.714064 | orchestrator | + name = (known after apply) 2026-02-04 01:19:33.714068 | orchestrator | + port = (known after apply) 2026-02-04 01:19:33.714072 | orchestrator | + uuid = (known after apply) 2026-02-04 01:19:33.714075 | orchestrator | } 2026-02-04 01:19:33.714079 | orchestrator | } 2026-02-04 01:19:33.714286 | orchestrator | 2026-02-04 01:19:33.714307 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-04 01:19:33.714312 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-02-04 01:19:33.714316 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 01:19:33.714330 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 01:19:33.714334 | orchestrator | + all_metadata = (known after apply) 2026-02-04 01:19:33.714337 | orchestrator | + all_tags = (known after apply) 2026-02-04 01:19:33.714341 | orchestrator | + availability_zone = "nova" 2026-02-04 01:19:33.714345 | orchestrator | + config_drive = true 2026-02-04 01:19:33.714349 | orchestrator | + created = (known after apply) 2026-02-04 01:19:33.714352 | orchestrator | + flavor_id = (known after apply) 2026-02-04 01:19:33.714356 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 01:19:33.714360 | orchestrator | + force_delete = false 2026-02-04 01:19:33.714364 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 01:19:33.714368 | orchestrator | + id = (known after apply) 2026-02-04 01:19:33.714371 | orchestrator | + image_id = (known after apply) 2026-02-04 01:19:33.714375 | orchestrator | + image_name = (known after apply) 2026-02-04 01:19:33.714379 | orchestrator | + key_pair = "testbed" 2026-02-04 01:19:33.714383 | orchestrator | + name = "testbed-node-3" 2026-02-04 01:19:33.714386 | orchestrator | + power_state = "active" 2026-02-04 01:19:33.714390 | orchestrator | + region = (known after apply) 2026-02-04 01:19:33.714394 | orchestrator | + security_groups = (known after apply) 2026-02-04 01:19:33.714398 | orchestrator | + stop_before_destroy = false 2026-02-04 01:19:33.714401 | orchestrator | + updated = (known after apply) 2026-02-04 01:19:33.714405 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 01:19:33.714409 | orchestrator | 2026-02-04 01:19:33.714413 | orchestrator | + block_device { 2026-02-04 01:19:33.714417 | orchestrator | + boot_index = 0 2026-02-04 01:19:33.714420 | orchestrator | + delete_on_termination = false 2026-02-04 
01:19:33.714424 | orchestrator | + destination_type = "volume" 2026-02-04 01:19:33.714431 | orchestrator | + multiattach = false 2026-02-04 01:19:33.714435 | orchestrator | + source_type = "volume" 2026-02-04 01:19:33.714439 | orchestrator | + uuid = (known after apply) 2026-02-04 01:19:33.714445 | orchestrator | } 2026-02-04 01:19:33.714452 | orchestrator | 2026-02-04 01:19:33.714457 | orchestrator | + network { 2026-02-04 01:19:33.714463 | orchestrator | + access_network = false 2026-02-04 01:19:33.714469 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 01:19:33.714476 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 01:19:33.714482 | orchestrator | + mac = (known after apply) 2026-02-04 01:19:33.714489 | orchestrator | + name = (known after apply) 2026-02-04 01:19:33.714495 | orchestrator | + port = (known after apply) 2026-02-04 01:19:33.714501 | orchestrator | + uuid = (known after apply) 2026-02-04 01:19:33.714506 | orchestrator | } 2026-02-04 01:19:33.714510 | orchestrator | } 2026-02-04 01:19:33.714704 | orchestrator | 2026-02-04 01:19:33.714717 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-04 01:19:33.714721 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 01:19:33.714725 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 01:19:33.714729 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 01:19:33.714733 | orchestrator | + all_metadata = (known after apply) 2026-02-04 01:19:33.714737 | orchestrator | + all_tags = (known after apply) 2026-02-04 01:19:33.714740 | orchestrator | + availability_zone = "nova" 2026-02-04 01:19:33.714744 | orchestrator | + config_drive = true 2026-02-04 01:19:33.714748 | orchestrator | + created = (known after apply) 2026-02-04 01:19:33.714752 | orchestrator | + flavor_id = (known after apply) 2026-02-04 01:19:33.714755 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 01:19:33.714759 | 
orchestrator | + force_delete = false 2026-02-04 01:19:33.714763 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 01:19:33.714767 | orchestrator | + id = (known after apply) 2026-02-04 01:19:33.714771 | orchestrator | + image_id = (known after apply) 2026-02-04 01:19:33.714777 | orchestrator | + image_name = (known after apply) 2026-02-04 01:19:33.714783 | orchestrator | + key_pair = "testbed" 2026-02-04 01:19:33.714790 | orchestrator | + name = "testbed-node-4" 2026-02-04 01:19:33.714805 | orchestrator | + power_state = "active" 2026-02-04 01:19:33.714811 | orchestrator | + region = (known after apply) 2026-02-04 01:19:33.714817 | orchestrator | + security_groups = (known after apply) 2026-02-04 01:19:33.714823 | orchestrator | + stop_before_destroy = false 2026-02-04 01:19:33.714829 | orchestrator | + updated = (known after apply) 2026-02-04 01:19:33.714837 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 01:19:33.714843 | orchestrator | 2026-02-04 01:19:33.714848 | orchestrator | + block_device { 2026-02-04 01:19:33.714854 | orchestrator | + boot_index = 0 2026-02-04 01:19:33.714858 | orchestrator | + delete_on_termination = false 2026-02-04 01:19:33.714861 | orchestrator | + destination_type = "volume" 2026-02-04 01:19:33.714865 | orchestrator | + multiattach = false 2026-02-04 01:19:33.714869 | orchestrator | + source_type = "volume" 2026-02-04 01:19:33.714873 | orchestrator | + uuid = (known after apply) 2026-02-04 01:19:33.714876 | orchestrator | } 2026-02-04 01:19:33.714880 | orchestrator | 2026-02-04 01:19:33.714884 | orchestrator | + network { 2026-02-04 01:19:33.714888 | orchestrator | + access_network = false 2026-02-04 01:19:33.714892 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 01:19:33.714895 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 01:19:33.714899 | orchestrator | + mac = (known after apply) 2026-02-04 01:19:33.714903 | orchestrator | + name = (known 
after apply) 2026-02-04 01:19:33.714907 | orchestrator | + port = (known after apply) 2026-02-04 01:19:33.714910 | orchestrator | + uuid = (known after apply) 2026-02-04 01:19:33.714914 | orchestrator | } 2026-02-04 01:19:33.714918 | orchestrator | } 2026-02-04 01:19:33.715151 | orchestrator | 2026-02-04 01:19:33.715174 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-04 01:19:33.715180 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-04 01:19:33.715186 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-04 01:19:33.715193 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-04 01:19:33.715199 | orchestrator | + all_metadata = (known after apply) 2026-02-04 01:19:33.715205 | orchestrator | + all_tags = (known after apply) 2026-02-04 01:19:33.715212 | orchestrator | + availability_zone = "nova" 2026-02-04 01:19:33.715218 | orchestrator | + config_drive = true 2026-02-04 01:19:33.715224 | orchestrator | + created = (known after apply) 2026-02-04 01:19:33.715230 | orchestrator | + flavor_id = (known after apply) 2026-02-04 01:19:33.715236 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-04 01:19:33.715242 | orchestrator | + force_delete = false 2026-02-04 01:19:33.715248 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-04 01:19:33.715254 | orchestrator | + id = (known after apply) 2026-02-04 01:19:33.715260 | orchestrator | + image_id = (known after apply) 2026-02-04 01:19:33.715266 | orchestrator | + image_name = (known after apply) 2026-02-04 01:19:33.715272 | orchestrator | + key_pair = "testbed" 2026-02-04 01:19:33.715279 | orchestrator | + name = "testbed-node-5" 2026-02-04 01:19:33.715285 | orchestrator | + power_state = "active" 2026-02-04 01:19:33.715291 | orchestrator | + region = (known after apply) 2026-02-04 01:19:33.715298 | orchestrator | + security_groups = (known after apply) 2026-02-04 01:19:33.715304 | orchestrator | + 
stop_before_destroy = false 2026-02-04 01:19:33.715310 | orchestrator | + updated = (known after apply) 2026-02-04 01:19:33.715316 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-04 01:19:33.715322 | orchestrator | 2026-02-04 01:19:33.715329 | orchestrator | + block_device { 2026-02-04 01:19:33.715335 | orchestrator | + boot_index = 0 2026-02-04 01:19:33.715342 | orchestrator | + delete_on_termination = false 2026-02-04 01:19:33.715348 | orchestrator | + destination_type = "volume" 2026-02-04 01:19:33.715354 | orchestrator | + multiattach = false 2026-02-04 01:19:33.715361 | orchestrator | + source_type = "volume" 2026-02-04 01:19:33.715367 | orchestrator | + uuid = (known after apply) 2026-02-04 01:19:33.715374 | orchestrator | } 2026-02-04 01:19:33.715378 | orchestrator | 2026-02-04 01:19:33.715382 | orchestrator | + network { 2026-02-04 01:19:33.715387 | orchestrator | + access_network = false 2026-02-04 01:19:33.715393 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-04 01:19:33.715399 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-04 01:19:33.715406 | orchestrator | + mac = (known after apply) 2026-02-04 01:19:33.715412 | orchestrator | + name = (known after apply) 2026-02-04 01:19:33.715418 | orchestrator | + port = (known after apply) 2026-02-04 01:19:33.715425 | orchestrator | + uuid = (known after apply) 2026-02-04 01:19:33.715431 | orchestrator | } 2026-02-04 01:19:33.715437 | orchestrator | } 2026-02-04 01:19:33.715525 | orchestrator | 2026-02-04 01:19:33.715546 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-04 01:19:33.715553 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-02-04 01:19:33.715560 | orchestrator | + fingerprint = (known after apply) 2026-02-04 01:19:33.715566 | orchestrator | + id = (known after apply) 2026-02-04 01:19:33.715572 | orchestrator | + name = "testbed" 2026-02-04 01:19:33.715576 | orchestrator | + private_key = 
(sensitive value) 2026-02-04 01:19:33.715580 | orchestrator | + public_key = (known after apply) 2026-02-04 01:19:33.715584 | orchestrator | + region = (known after apply) 2026-02-04 01:19:33.715587 | orchestrator | + user_id = (known after apply) 2026-02-04 01:19:33.715591 | orchestrator | } 2026-02-04 01:19:33.715636 | orchestrator | 2026-02-04 01:19:33.715648 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-04 01:19:33.715652 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-04 01:19:33.715664 | orchestrator | + device = (known after apply) 2026-02-04 01:19:33.715668 | orchestrator | + id = (known after apply) 2026-02-04 01:19:33.715672 | orchestrator | + instance_id = (known after apply) 2026-02-04 01:19:33.715676 | orchestrator | + region = (known after apply) 2026-02-04 01:19:33.715684 | orchestrator | + volume_id = (known after apply) 2026-02-04 01:19:33.715688 | orchestrator | } 2026-02-04 01:19:33.715725 | orchestrator | 2026-02-04 01:19:33.715736 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-04 01:19:33.715740 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-04 01:19:33.715744 | orchestrator | + device = (known after apply) 2026-02-04 01:19:33.715748 | orchestrator | + id = (known after apply) 2026-02-04 01:19:33.715752 | orchestrator | + instance_id = (known after apply) 2026-02-04 01:19:33.715756 | orchestrator | + region = (known after apply) 2026-02-04 01:19:33.715760 | orchestrator | + volume_id = (known after apply) 2026-02-04 01:19:33.715763 | orchestrator | } 2026-02-04 01:19:33.715834 | orchestrator | 2026-02-04 01:19:33.715852 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-04 01:19:33.715860 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
2026-02-04 01:19:33.715866 | orchestrator | {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-02-04 01:19:33.721249 | orchestrator | + network_id = (known after apply)
2026-02-04 01:19:33.721253 | orchestrator | + no_gateway = false
2026-02-04 01:19:33.721257 | orchestrator | + region = (known after apply)
2026-02-04 01:19:33.721261 | orchestrator | + service_types = (known after apply)
2026-02-04 01:19:33.721269 | orchestrator | + tenant_id = (known after apply)
2026-02-04 01:19:33.721273 | orchestrator |
2026-02-04 01:19:33.721277 | orchestrator | + allocation_pool {
2026-02-04 01:19:33.721281 | orchestrator | + end = "192.168.31.250"
2026-02-04 01:19:33.721284 | orchestrator | + start = "192.168.31.200"
2026-02-04 01:19:33.721288 | orchestrator | }
2026-02-04 01:19:33.721292 | orchestrator | }
2026-02-04 01:19:33.721322 | orchestrator |
2026-02-04 01:19:33.721337 | orchestrator | # terraform_data.image will be created
2026-02-04 01:19:33.721345 | orchestrator | + resource "terraform_data" "image" {
2026-02-04 01:19:33.721352 | orchestrator | + id = (known after apply)
2026-02-04 01:19:33.721359 | orchestrator | + input = "Ubuntu 24.04"
2026-02-04 01:19:33.721366 | orchestrator | + output = (known after apply)
2026-02-04 01:19:33.721373 | orchestrator | }
2026-02-04 01:19:33.721414 | orchestrator |
2026-02-04 01:19:33.721426 | orchestrator | # terraform_data.image_node will be created
2026-02-04 01:19:33.721430 | orchestrator | + resource "terraform_data" "image_node" {
2026-02-04 01:19:33.721434 | orchestrator | + id = (known after apply)
2026-02-04 01:19:33.721438 | orchestrator | + input = "Ubuntu 24.04"
2026-02-04 01:19:33.721442 | orchestrator | + output = (known after apply)
2026-02-04 01:19:33.721447 | orchestrator | }
2026-02-04 01:19:33.721468 | orchestrator |
2026-02-04 01:19:33.721476 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-02-04 01:19:33.721492 | orchestrator |
2026-02-04 01:19:33.721497 | orchestrator | Changes to Outputs:
2026-02-04 01:19:33.721507 | orchestrator | + manager_address = (sensitive value)
2026-02-04 01:19:33.721512 | orchestrator | + private_key = (sensitive value)
2026-02-04 01:19:33.931821 | orchestrator | terraform_data.image: Creating...
2026-02-04 01:19:33.932164 | orchestrator | terraform_data.image: Creation complete after 0s [id=3cb4b092-175d-9239-5f2b-ee4d98be0d96]
2026-02-04 01:19:33.933041 | orchestrator | terraform_data.image_node: Creating...
2026-02-04 01:19:33.933475 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=77b9842b-6d3a-1ec4-9c17-543fefac3b2f]
2026-02-04 01:19:33.943096 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-04 01:19:33.950371 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-04 01:19:33.951062 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-04 01:19:33.955178 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-04 01:19:33.955272 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-04 01:19:33.957497 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-04 01:19:33.957525 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-04 01:19:33.958643 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-04 01:19:33.964340 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-04 01:19:33.966339 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-04 01:19:34.410646 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2026-02-04 01:19:34.414601 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-04 01:19:34.445687 | orchestrator | data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-04 01:19:34.449994 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-04 01:19:34.452838 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-04 01:19:34.455747 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-04 01:19:34.995929 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=a588c6fa-af78-4097-bf50-5e1636ce651d]
2026-02-04 01:19:35.011445 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-04 01:19:35.018101 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=b3fff64b14d6f7b7600e306d23c7bfc86bb73f5f]
2026-02-04 01:19:35.027611 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-04 01:19:35.035881 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=04acd518cae55e5ae3baca1e096d0e5ad0a61106]
2026-02-04 01:19:35.042314 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-04 01:19:37.549996 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=6d2cd144-5f23-453e-8510-b2ac8c490536]
2026-02-04 01:19:37.555531 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=3eb80431-163e-49f3-a2bf-dfaced367a52]
2026-02-04 01:19:37.558174 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-04 01:19:37.562176 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-04 01:19:37.568964 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=b5de00e7-ee07-4e3d-81c3-372cd77c193b]
2026-02-04 01:19:37.581691 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-04 01:19:37.592961 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=9e979b3a-dcfc-4e73-af9b-91d41771b388]
2026-02-04 01:19:37.595901 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=e0d2f838-a19f-44eb-bcbc-1b531e772c23]
2026-02-04 01:19:37.601478 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-04 01:19:37.601656 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-04 01:19:37.619951 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=859f82ae-faba-4c56-a83f-b08f511c4f40]
2026-02-04 01:19:37.631152 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-04 01:19:37.668343 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=10db325f-6922-4f85-a906-c9ac62af1811]
2026-02-04 01:19:37.671330 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=aa7bd7a5-43b2-4e34-8a80-8d27fcf27675]
2026-02-04 01:19:37.675074 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-04 01:19:37.692440 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=87322fe2-f6c0-4479-8323-00ed6f38f0dd]
2026-02-04 01:19:38.376637 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=e0e69a1b-49e5-4bfa-8d89-c757420b8cc5]
2026-02-04 01:19:38.494067 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 0s [id=6ab20011-0cc3-442f-974f-478953a4e4a7]
2026-02-04 01:19:38.499362 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-04 01:19:40.939864 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=d5a1c69a-e203-43f1-92a7-d53a24ddc92f]
2026-02-04 01:19:40.966846 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=e5ab81eb-29ae-4e69-b67a-37e5644be861]
2026-02-04 01:19:40.989758 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=5c0a15c2-b328-40df-8b11-eca46f34c8bf]
2026-02-04 01:19:40.997879 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=50d185a4-af79-48b0-8c50-ba5ba990d99d]
2026-02-04 01:19:41.027363 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=853c0bfc-16cc-413e-b766-6ae1ea37d859]
2026-02-04 01:19:41.069839 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=cdb44653-92bc-471c-ab02-c768f71f0118]
2026-02-04 01:19:41.598035 | orchestrator | openstack_networking_router_v2.router: Creation complete after 4s [id=d5faa2b2-66b6-4ee5-bdb8-fc91f7453c3a]
2026-02-04 01:19:41.603216 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-04 01:19:41.603328 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-04 01:19:41.603973 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-04 01:19:41.789408 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=9d0b7abb-ea7a-4e93-922d-b8340f8cf57d]
2026-02-04 01:19:41.799780 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-04 01:19:41.800640 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-04 01:19:41.801305 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=b4dda80e-ebba-49f3-a605-f45ed1d4b0c8]
2026-02-04 01:19:41.801436 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-04 01:19:41.803879 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-04 01:19:41.810694 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-04 01:19:41.812876 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-04 01:19:41.812942 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-04 01:19:41.816717 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-04 01:19:41.817169 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-04 01:19:41.953490 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=cccf53fa-ea99-4eb7-b27a-e739631ea9e4]
2026-02-04 01:19:41.960999 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-04 01:19:42.038494 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=f4ee3831-c0dc-40e1-a0e2-3733a12b6e28]
2026-02-04 01:19:42.047040 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-04 01:19:42.353503 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=aeadc64b-2693-4d26-8746-47d57a18340f]
2026-02-04 01:19:42.365277 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-04 01:19:42.454445 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=a99df337-cdf5-4ccb-a410-87f4abcc1af6]
2026-02-04 01:19:42.463343 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-04 01:19:42.508315 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=86ec0049-5f61-4e8d-b4a4-6c4f98279f40]
2026-02-04 01:19:42.512360 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-04 01:19:42.514935 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=7554cec9-8f34-47a0-a0e9-8b5c7b92bd92]
2026-02-04 01:19:42.518194 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-04 01:19:42.534120 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=ff4eff5e-b8af-4ee9-87ea-976005de8aa9]
2026-02-04 01:19:42.538581 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-04 01:19:42.658960 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=9435f5ea-08e5-485e-88e2-520ac9468470]
2026-02-04 01:19:43.540228 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=b24c5627-b455-4ffa-84e3-0c182ea7d860]
2026-02-04 01:19:43.540306 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=6fca9e02-0a62-49f1-902a-4650274c2d8b]
2026-02-04 01:19:43.540324 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=976c83da-fe1c-4bbf-bdb0-689b62ba0b85]
2026-02-04 01:19:43.540329 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=212c4355-c0b5-4abb-9a27-300fa0933faf]
2026-02-04 01:19:43.540334 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=a74dcf99-b2f8-40aa-b504-cef8bf234061]
2026-02-04 01:19:43.540339 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=a9ba93ca-74a8-40f8-825a-e65c96543f4d]
2026-02-04 01:19:43.540344 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=ea906cdb-778a-48a2-81e4-2f984092f137]
2026-02-04 01:19:43.540349 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=af2b6ca2-350e-4d82-9499-83c706530046]
2026-02-04 01:19:45.813873 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 4s [id=93ecea0f-e1dc-4cf2-9ad1-ad0af130f979]
2026-02-04 01:19:45.828020 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-04 01:19:45.845622 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-04 01:19:45.845971 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-04 01:19:45.846772 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-04 01:19:45.846982 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-04 01:19:45.853473 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-04 01:19:45.861096 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-04 01:19:47.493649 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=73786d89-d058-4bf6-9dc7-3a1ed240152b]
2026-02-04 01:19:47.500012 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-04 01:19:47.504162 | orchestrator | local_file.inventory: Creating...
2026-02-04 01:19:47.507807 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-04 01:19:47.507871 | orchestrator | local_file.inventory: Creation complete after 0s [id=a6cc5fa739c63fa084c74d51c0d00a7fb3eef83e]
2026-02-04 01:19:47.512558 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=8b10bab329e7687faa6afac2a0fd4ced05001755]
2026-02-04 01:19:48.216519 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=73786d89-d058-4bf6-9dc7-3a1ed240152b]
2026-02-04 01:19:55.848332 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-04 01:19:55.848479 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-04 01:19:55.849426 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-04 01:19:55.849552 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-04 01:19:55.856897 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-04 01:19:55.862456 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-04 01:20:05.858090 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-04 01:20:05.858186 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-04 01:20:05.858843 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-04 01:20:05.858858 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-04 01:20:05.858868 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-04 01:20:05.863517 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-04 01:20:06.363441 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=7404f92d-b15f-4eba-8b20-98737b15b769]
2026-02-04 01:20:06.404071 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=55ee70a7-bce6-4823-98c7-88846e9247ca]
2026-02-04 01:20:06.689889 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 21s [id=579ec4e3-8a33-42f0-9ea9-55d3e4f304bf]
2026-02-04 01:20:06.723033 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=8fbe1acb-dce5-412e-a9f6-27fa9575e728]
2026-02-04 01:20:15.866598 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-02-04 01:20:15.866701 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-02-04 01:20:16.580644 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=c31faf33-212b-4ce5-91e8-f99213e05be7]
2026-02-04 01:20:16.719356 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=5ffd37dc-e4c5-4b5f-b4a0-d67c3a0ffa81]
2026-02-04 01:20:16.733444 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-04 01:20:16.736702 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=3396694835660468660]
2026-02-04 01:20:16.742090 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-04 01:20:16.748833 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-04 01:20:16.749161 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-04 01:20:16.752683 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-04 01:20:16.757682 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-04 01:20:16.770561 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-04 01:20:16.774573 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-04 01:20:16.778076 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-04 01:20:16.782133 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-04 01:20:16.782335 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-04 01:20:20.126834 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=5ffd37dc-e4c5-4b5f-b4a0-d67c3a0ffa81/e0d2f838-a19f-44eb-bcbc-1b531e772c23]
2026-02-04 01:20:20.126912 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=579ec4e3-8a33-42f0-9ea9-55d3e4f304bf/10db325f-6922-4f85-a906-c9ac62af1811]
2026-02-04 01:20:20.147230 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=55ee70a7-bce6-4823-98c7-88846e9247ca/3eb80431-163e-49f3-a2bf-dfaced367a52]
2026-02-04 01:20:20.150267 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=579ec4e3-8a33-42f0-9ea9-55d3e4f304bf/859f82ae-faba-4c56-a83f-b08f511c4f40]
2026-02-04 01:20:20.174826 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=5ffd37dc-e4c5-4b5f-b4a0-d67c3a0ffa81/87322fe2-f6c0-4479-8323-00ed6f38f0dd]
2026-02-04 01:20:20.177681 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=55ee70a7-bce6-4823-98c7-88846e9247ca/aa7bd7a5-43b2-4e34-8a80-8d27fcf27675]
2026-02-04 01:20:26.257560 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=5ffd37dc-e4c5-4b5f-b4a0-d67c3a0ffa81/6d2cd144-5f23-453e-8510-b2ac8c490536]
2026-02-04 01:20:26.258318 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=579ec4e3-8a33-42f0-9ea9-55d3e4f304bf/9e979b3a-dcfc-4e73-af9b-91d41771b388]
2026-02-04 01:20:26.286401 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=55ee70a7-bce6-4823-98c7-88846e9247ca/b5de00e7-ee07-4e3d-81c3-372cd77c193b]
2026-02-04 01:20:26.782912 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-04 01:20:36.784265 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-04 01:20:37.172510 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=224259ce-b6bf-438b-92d2-7521fe274943]
2026-02-04 01:20:37.190129 | orchestrator |
2026-02-04 01:20:37.190212 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-04 01:20:37.190222 | orchestrator |
2026-02-04 01:20:37.190229 | orchestrator | Outputs:
2026-02-04 01:20:37.190236 | orchestrator |
2026-02-04 01:20:37.190242 | orchestrator | manager_address =
2026-02-04 01:20:37.190248 | orchestrator | private_key =
2026-02-04 01:20:37.582383 | orchestrator | ok: Runtime: 0:01:08.793364
2026-02-04 01:20:37.623342 |
2026-02-04 01:20:37.623504 | TASK [Fetch manager address]
2026-02-04 01:20:38.120461 | orchestrator | ok
2026-02-04 01:20:38.130548 |
2026-02-04 01:20:38.130675 | TASK [Set manager_host address]
2026-02-04 01:20:38.212855 | orchestrator | ok
2026-02-04 01:20:38.226971 |
2026-02-04 01:20:38.227140 | LOOP [Update ansible collections]
2026-02-04 01:20:39.805791 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-04 01:20:39.806226 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-04 01:20:39.806294 | orchestrator | Starting galaxy collection install process
2026-02-04 01:20:39.806339 | orchestrator | Process install dependency map
2026-02-04 01:20:39.806379 | orchestrator | Starting collection install process
2026-02-04 01:20:39.806413 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-02-04 01:20:39.806452 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-02-04 01:20:39.806493 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-04 01:20:39.806561 | orchestrator | ok: Item: commons Runtime: 0:00:01.174245
2026-02-04 01:20:40.945217 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-04 01:20:40.945405 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-04 01:20:40.945468 | orchestrator | Starting galaxy collection install process
2026-02-04 01:20:40.945518 | orchestrator | Process install dependency map
2026-02-04 01:20:40.945582 | orchestrator | Starting collection install process
2026-02-04 01:20:40.945628 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-02-04 01:20:40.945671 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-02-04 01:20:40.945711 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-04 01:20:40.945777 | orchestrator | ok: Item: services Runtime: 0:00:00.850390
2026-02-04 01:20:40.966781 |
2026-02-04 01:20:40.966990 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-04 01:20:51.514815 | orchestrator | ok
2026-02-04 01:20:51.525650 |
2026-02-04 01:20:51.525770 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-04 01:21:51.577445 | orchestrator | ok
2026-02-04 01:21:51.588598 |
2026-02-04 01:21:51.588722 | TASK [Fetch manager ssh hostkey]
2026-02-04 01:21:53.160478 | orchestrator | Output suppressed because no_log was given
2026-02-04 01:21:53.175313 |
2026-02-04 01:21:53.175469 | TASK [Get ssh keypair from terraform environment]
2026-02-04 01:21:53.710319 | orchestrator | ok: Runtime: 0:00:00.008025
2026-02-04 01:21:53.728688 |
2026-02-04 01:21:53.728859 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-04 01:21:53.765593 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-04 01:21:53.774865 |
2026-02-04 01:21:53.774984 | TASK [Run manager part 0]
2026-02-04 01:21:55.152851 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-04 01:21:55.386630 | orchestrator |
2026-02-04 01:21:55.386735 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-04 01:21:55.386745 | orchestrator |
2026-02-04 01:21:55.386758 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-04 01:21:57.386136 | orchestrator | ok: [testbed-manager]
2026-02-04 01:21:57.386195 | orchestrator |
2026-02-04 01:21:57.386218 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-04 01:21:57.386228 | orchestrator |
2026-02-04 01:21:57.386237 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-04 01:21:59.407770 | orchestrator | ok: [testbed-manager]
2026-02-04 01:21:59.407897 | orchestrator |
2026-02-04 01:21:59.407911 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-04 01:22:00.085164 | orchestrator | ok: [testbed-manager]
2026-02-04 01:22:00.085228 | orchestrator |
2026-02-04 01:22:00.085237 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-04 01:22:00.133116 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:22:00.133185 | orchestrator |
2026-02-04 01:22:00.133196 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-04 01:22:00.162264 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:22:00.162325 | orchestrator |
2026-02-04 01:22:00.162335 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-04 01:22:00.190996 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:22:00.191049 | orchestrator | 2026-02-04 01:22:00.191056 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-04 01:22:00.218160 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:22:00.218212 | orchestrator | 2026-02-04 01:22:00.218219 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-04 01:22:00.248949 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:22:00.249020 | orchestrator | 2026-02-04 01:22:00.249030 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-04 01:22:00.284210 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:22:00.284295 | orchestrator | 2026-02-04 01:22:00.284308 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-04 01:22:00.319676 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:22:00.319765 | orchestrator | 2026-02-04 01:22:00.319775 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-04 01:22:01.042952 | orchestrator | changed: [testbed-manager] 2026-02-04 01:22:01.043007 | orchestrator | 2026-02-04 01:22:01.043017 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-02-04 01:24:59.114295 | orchestrator | changed: [testbed-manager] 2026-02-04 01:24:59.114396 | orchestrator | 2026-02-04 01:24:59.114415 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-04 01:26:20.329067 | orchestrator | changed: [testbed-manager] 2026-02-04 01:26:20.329165 | orchestrator | 2026-02-04 01:26:20.329179 | orchestrator | TASK [Install required 
packages] *********************************************** 2026-02-04 01:26:43.802372 | orchestrator | changed: [testbed-manager] 2026-02-04 01:26:43.802487 | orchestrator | 2026-02-04 01:26:43.802502 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-04 01:26:54.006771 | orchestrator | changed: [testbed-manager] 2026-02-04 01:26:54.006876 | orchestrator | 2026-02-04 01:26:54.006891 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-04 01:26:54.047120 | orchestrator | ok: [testbed-manager] 2026-02-04 01:26:54.047205 | orchestrator | 2026-02-04 01:26:54.047222 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-04 01:26:54.930832 | orchestrator | ok: [testbed-manager] 2026-02-04 01:26:54.930932 | orchestrator | 2026-02-04 01:26:54.930951 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-04 01:26:55.722156 | orchestrator | changed: [testbed-manager] 2026-02-04 01:26:55.722306 | orchestrator | 2026-02-04 01:26:55.722334 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-04 01:27:02.649548 | orchestrator | changed: [testbed-manager] 2026-02-04 01:27:02.649654 | orchestrator | 2026-02-04 01:27:02.649696 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-04 01:27:09.076207 | orchestrator | changed: [testbed-manager] 2026-02-04 01:27:09.076332 | orchestrator | 2026-02-04 01:27:09.076361 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-04 01:27:11.995872 | orchestrator | changed: [testbed-manager] 2026-02-04 01:27:11.995982 | orchestrator | 2026-02-04 01:27:11.996014 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-04 01:27:13.930985 | 
orchestrator | changed: [testbed-manager] 2026-02-04 01:27:13.931031 | orchestrator | 2026-02-04 01:27:13.931040 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-04 01:27:15.072042 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-04 01:27:15.072104 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-04 01:27:15.072115 | orchestrator | 2026-02-04 01:27:15.072124 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-04 01:27:15.112874 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-04 01:27:15.112935 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-04 01:27:15.112946 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-04 01:27:15.112955 | orchestrator | deprecation_warnings=False in ansible.cfg. 
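The venv tasks above (create directory, then install netaddr, ansible-core, requests and docker with version floors) condense to a short bootstrap. A sketch under the log's pins; `bootstrap_venv` is a hypothetical helper, not something the playbook defines:

```shell
# Create a virtualenv and install the tooling the manager playbook needs.
# Path and version pins are taken from the log tasks above.
bootstrap_venv() {
  venv=$1
  python3 -m venv "$venv"                      # "Create venv directory"
  "$venv/bin/pip" install netaddr ansible-core \
    'requests>=2.32.2' 'docker>=7.1.0'         # the four install tasks
}
```

The floors on requests and docker matter because newer community.docker releases require them; older system packages are removed first ("Remove some python packages") so the venv copies win.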
2026-02-04 01:27:20.833813 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-04 01:27:20.833923 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-04 01:27:20.833941 | orchestrator | 2026-02-04 01:27:20.833954 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-04 01:27:21.472890 | orchestrator | changed: [testbed-manager] 2026-02-04 01:27:21.472990 | orchestrator | 2026-02-04 01:27:21.473007 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-04 01:27:41.473930 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-04 01:27:41.474110 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-04 01:27:41.474147 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-04 01:27:41.474168 | orchestrator | 2026-02-04 01:27:41.474189 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-04 01:27:43.977048 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-04 01:27:43.977149 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-04 01:27:43.977166 | orchestrator | 2026-02-04 01:27:43.977197 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-04 01:27:43.977221 | orchestrator | 2026-02-04 01:27:43.977233 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 01:27:45.487012 | orchestrator | ok: [testbed-manager] 2026-02-04 01:27:45.487126 | orchestrator | 2026-02-04 01:27:45.487144 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-04 01:27:45.527807 | orchestrator | ok: [testbed-manager] 2026-02-04 01:27:45.527888 | 
orchestrator | 2026-02-04 01:27:45.527899 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-04 01:27:45.593614 | orchestrator | ok: [testbed-manager] 2026-02-04 01:27:45.593687 | orchestrator | 2026-02-04 01:27:45.593695 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-04 01:27:46.401936 | orchestrator | changed: [testbed-manager] 2026-02-04 01:27:46.402098 | orchestrator | 2026-02-04 01:27:46.402131 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-04 01:27:47.141772 | orchestrator | changed: [testbed-manager] 2026-02-04 01:27:47.141874 | orchestrator | 2026-02-04 01:27:47.141890 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-04 01:27:48.480977 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-04 01:27:48.481032 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-04 01:27:48.481043 | orchestrator | 2026-02-04 01:27:48.481058 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-04 01:27:49.832322 | orchestrator | changed: [testbed-manager] 2026-02-04 01:27:49.832375 | orchestrator | 2026-02-04 01:27:49.832382 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-04 01:27:51.624078 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 01:27:51.624767 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-04 01:27:51.624804 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-04 01:27:51.624817 | orchestrator | 2026-02-04 01:27:51.624831 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-04 01:27:51.688977 | orchestrator | skipping: 
[testbed-manager] 2026-02-04 01:27:51.689091 | orchestrator | 2026-02-04 01:27:51.689115 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-04 01:27:51.765191 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:27:51.765288 | orchestrator | 2026-02-04 01:27:51.765307 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-04 01:27:52.324529 | orchestrator | changed: [testbed-manager] 2026-02-04 01:27:52.324628 | orchestrator | 2026-02-04 01:27:52.324645 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-04 01:27:52.402744 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:27:52.402856 | orchestrator | 2026-02-04 01:27:52.402879 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-04 01:27:53.305954 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-04 01:27:53.306097 | orchestrator | changed: [testbed-manager] 2026-02-04 01:27:53.306116 | orchestrator | 2026-02-04 01:27:53.306129 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-04 01:27:53.341388 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:27:53.341485 | orchestrator | 2026-02-04 01:27:53.341497 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-04 01:27:53.380037 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:27:53.380107 | orchestrator | 2026-02-04 01:27:53.380117 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-04 01:27:53.409772 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:27:53.409835 | orchestrator | 2026-02-04 01:27:53.409854 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-04 01:27:53.486371 | 
orchestrator | skipping: [testbed-manager] 2026-02-04 01:27:53.486505 | orchestrator | 2026-02-04 01:27:53.486531 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-04 01:27:54.186974 | orchestrator | ok: [testbed-manager] 2026-02-04 01:27:54.187037 | orchestrator | 2026-02-04 01:27:54.187044 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-04 01:27:54.187050 | orchestrator | 2026-02-04 01:27:54.187054 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 01:27:55.624197 | orchestrator | ok: [testbed-manager] 2026-02-04 01:27:55.624312 | orchestrator | 2026-02-04 01:27:55.624337 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-04 01:27:56.604465 | orchestrator | changed: [testbed-manager] 2026-02-04 01:27:56.604513 | orchestrator | 2026-02-04 01:27:56.604521 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:27:56.604529 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-04 01:27:56.604537 | orchestrator | 2026-02-04 01:27:57.051888 | orchestrator | ok: Runtime: 0:06:02.616802 2026-02-04 01:27:57.068485 | 2026-02-04 01:27:57.068636 | TASK [Point out that logging in to the manager is now possible] 2026-02-04 01:27:57.109322 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-04 01:27:57.117425 | 2026-02-04 01:27:57.117532 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-04 01:27:57.157453 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
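The operator-role tasks in the play above include "Set language variables in .bashrc configuration file", which appends three export lines once each (lineinfile-style idempotence). A sketch of that behaviour; the file path stands in for the operator user's ~/.bashrc:

```shell
# Append each C.UTF-8 locale export to the given rc file exactly once,
# matching the three changed items shown in the log.
set_locale_exports() {
  rc=$1
  for line in 'export LANGUAGE=C.UTF-8' 'export LANG=C.UTF-8' 'export LC_ALL=C.UTF-8'; do
    # -x: whole-line match, -F: fixed string; append only when missing.
    grep -qxF "$line" "$rc" 2>/dev/null || printf '%s\n' "$line" >> "$rc"
  done
}
```

Running it a second time changes nothing, which is why a re-run of the play reports ok instead of changed for this task.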
2026-02-04 01:27:57.164856 | 2026-02-04 01:27:57.164967 | TASK [Run manager part 1 + 2] 2026-02-04 01:27:58.088575 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-04 01:27:58.158332 | orchestrator | 2026-02-04 01:27:58.158383 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-04 01:27:58.158390 | orchestrator | 2026-02-04 01:27:58.158403 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 01:28:00.669829 | orchestrator | ok: [testbed-manager] 2026-02-04 01:28:00.669905 | orchestrator | 2026-02-04 01:28:00.669929 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-04 01:28:00.710532 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:28:00.710590 | orchestrator | 2026-02-04 01:28:00.710602 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-04 01:28:00.750873 | orchestrator | ok: [testbed-manager] 2026-02-04 01:28:00.750930 | orchestrator | 2026-02-04 01:28:00.750942 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-04 01:28:00.793593 | orchestrator | ok: [testbed-manager] 2026-02-04 01:28:00.793654 | orchestrator | 2026-02-04 01:28:00.793666 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-04 01:28:00.870250 | orchestrator | ok: [testbed-manager] 2026-02-04 01:28:00.870309 | orchestrator | 2026-02-04 01:28:00.870320 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-04 01:28:00.943081 | orchestrator | ok: [testbed-manager] 2026-02-04 01:28:00.943142 | orchestrator | 2026-02-04 01:28:00.943154 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-04 01:28:00.997255 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-04 01:28:00.997306 | orchestrator | 2026-02-04 01:28:00.997313 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-04 01:28:01.756332 | orchestrator | ok: [testbed-manager] 2026-02-04 01:28:01.756506 | orchestrator | 2026-02-04 01:28:01.756520 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-04 01:28:01.813852 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:28:01.813911 | orchestrator | 2026-02-04 01:28:01.813919 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-04 01:28:03.233848 | orchestrator | changed: [testbed-manager] 2026-02-04 01:28:03.233902 | orchestrator | 2026-02-04 01:28:03.233911 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-04 01:28:03.823609 | orchestrator | ok: [testbed-manager] 2026-02-04 01:28:03.823680 | orchestrator | 2026-02-04 01:28:03.823689 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-04 01:28:05.034621 | orchestrator | changed: [testbed-manager] 2026-02-04 01:28:05.034670 | orchestrator | 2026-02-04 01:28:05.034682 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-04 01:28:20.823859 | orchestrator | changed: [testbed-manager] 2026-02-04 01:28:20.823993 | orchestrator | 2026-02-04 01:28:20.824022 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-04 01:28:21.524801 | orchestrator | ok: [testbed-manager] 2026-02-04 01:28:21.524910 | orchestrator | 2026-02-04 01:28:21.524931 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-04 01:28:21.581781 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:28:21.581880 | orchestrator | 2026-02-04 01:28:21.581897 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-04 01:28:22.507951 | orchestrator | changed: [testbed-manager] 2026-02-04 01:28:22.508067 | orchestrator | 2026-02-04 01:28:22.508092 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-04 01:28:23.460173 | orchestrator | changed: [testbed-manager] 2026-02-04 01:28:23.460260 | orchestrator | 2026-02-04 01:28:23.460277 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-04 01:28:24.011504 | orchestrator | changed: [testbed-manager] 2026-02-04 01:28:24.011595 | orchestrator | 2026-02-04 01:28:24.011611 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-04 01:28:24.049489 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-04 01:28:24.049599 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-04 01:28:24.049609 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-04 01:28:24.049616 | orchestrator | deprecation_warnings=False in ansible.cfg. 
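The repository role above removes the legacy sources.list and installs a deb822-style ubuntu.sources. A minimal example of that file format; the suite names (noble, assumed from the Ubuntu 24.04 job), mirror URL, and keyring path are stock-Ubuntu placeholders and the role's actual template may differ:

```
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
```

The subsequent "Update package cache" task is what makes APT pick up the new source definition.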
2026-02-04 01:28:26.697638 | orchestrator | changed: [testbed-manager] 2026-02-04 01:28:26.697705 | orchestrator | 2026-02-04 01:28:26.697712 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-04 01:28:36.299550 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-04 01:28:36.299652 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-04 01:28:36.299669 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-04 01:28:36.299681 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-04 01:28:36.299701 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-04 01:28:36.299711 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-04 01:28:36.299721 | orchestrator | 2026-02-04 01:28:36.299733 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-04 01:28:37.433253 | orchestrator | changed: [testbed-manager] 2026-02-04 01:28:37.433355 | orchestrator | 2026-02-04 01:28:37.433371 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-04 01:28:37.471742 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:28:37.471849 | orchestrator | 2026-02-04 01:28:37.471874 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-04 01:28:40.778010 | orchestrator | changed: [testbed-manager] 2026-02-04 01:28:40.778184 | orchestrator | 2026-02-04 01:28:40.778213 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-04 01:28:40.819410 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:28:40.819482 | orchestrator | 2026-02-04 01:28:40.819493 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-04 01:30:31.519502 | orchestrator | changed: [testbed-manager] 2026-02-04 
01:30:31.519604 | orchestrator | 2026-02-04 01:30:31.519621 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-04 01:30:32.831576 | orchestrator | ok: [testbed-manager] 2026-02-04 01:30:32.831652 | orchestrator | 2026-02-04 01:30:32.831666 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:30:32.831677 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-04 01:30:32.831686 | orchestrator | 2026-02-04 01:30:33.305530 | orchestrator | ok: Runtime: 0:02:35.508994 2026-02-04 01:30:33.318045 | 2026-02-04 01:30:33.318209 | TASK [Reboot manager] 2026-02-04 01:30:34.858206 | orchestrator | ok: Runtime: 0:00:01.005374 2026-02-04 01:30:34.874327 | 2026-02-04 01:30:34.874500 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-04 01:30:51.327183 | orchestrator | ok 2026-02-04 01:30:51.336698 | 2026-02-04 01:30:51.336811 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-04 01:31:51.380693 | orchestrator | ok 2026-02-04 01:31:51.389104 | 2026-02-04 01:31:51.389220 | TASK [Deploy manager + bootstrap nodes] 2026-02-04 01:31:54.250235 | orchestrator | 2026-02-04 01:31:54.250486 | orchestrator | # DEPLOY MANAGER 2026-02-04 01:31:54.250524 | orchestrator | 2026-02-04 01:31:54.250549 | orchestrator | + set -e 2026-02-04 01:31:54.250572 | orchestrator | + echo 2026-02-04 01:31:54.250595 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-04 01:31:54.250624 | orchestrator | + echo 2026-02-04 01:31:54.250684 | orchestrator | + cat /opt/manager-vars.sh 2026-02-04 01:31:54.253709 | orchestrator | export NUMBER_OF_NODES=6 2026-02-04 01:31:54.253798 | orchestrator | 2026-02-04 01:31:54.253821 | orchestrator | export CEPH_VERSION=reef 2026-02-04 01:31:54.253843 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-04 01:31:54.253865 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-04 01:31:54.253906 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-04 01:31:54.253927 | orchestrator | 2026-02-04 01:31:54.253947 | orchestrator | export ARA=false 2026-02-04 01:31:54.253959 | orchestrator | export DEPLOY_MODE=manager 2026-02-04 01:31:54.253977 | orchestrator | export TEMPEST=false 2026-02-04 01:31:54.253989 | orchestrator | export IS_ZUUL=true 2026-02-04 01:31:54.254000 | orchestrator | 2026-02-04 01:31:54.254061 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-04 01:31:54.254078 | orchestrator | export EXTERNAL_API=false 2026-02-04 01:31:54.254089 | orchestrator | 2026-02-04 01:31:54.254100 | orchestrator | export IMAGE_USER=ubuntu 2026-02-04 01:31:54.254116 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-04 01:31:54.254127 | orchestrator | 2026-02-04 01:31:54.254138 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-04 01:31:54.254164 | orchestrator | 2026-02-04 01:31:54.254176 | orchestrator | + echo 2026-02-04 01:31:54.254190 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-04 01:31:54.255080 | orchestrator | ++ export INTERACTIVE=false 2026-02-04 01:31:54.255141 | orchestrator | ++ INTERACTIVE=false 2026-02-04 01:31:54.255165 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-04 01:31:54.255188 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-04 01:31:54.255415 | orchestrator | + source /opt/manager-vars.sh 2026-02-04 01:31:54.255444 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-04 01:31:54.255466 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-04 01:31:54.255486 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-04 01:31:54.255506 | orchestrator | ++ CEPH_VERSION=reef 2026-02-04 01:31:54.255527 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-04 01:31:54.255548 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-04 01:31:54.255566 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-04 01:31:54.255585 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-04 01:31:54.255603 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-04 01:31:54.255640 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-04 01:31:54.255659 | orchestrator | ++ export ARA=false 2026-02-04 01:31:54.255678 | orchestrator | ++ ARA=false 2026-02-04 01:31:54.255697 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-04 01:31:54.255716 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-04 01:31:54.255735 | orchestrator | ++ export TEMPEST=false 2026-02-04 01:31:54.255754 | orchestrator | ++ TEMPEST=false 2026-02-04 01:31:54.255781 | orchestrator | ++ export IS_ZUUL=true 2026-02-04 01:31:54.255799 | orchestrator | ++ IS_ZUUL=true 2026-02-04 01:31:54.255817 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-04 01:31:54.255835 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-04 01:31:54.255853 | orchestrator | ++ export EXTERNAL_API=false 2026-02-04 01:31:54.255872 | orchestrator | ++ EXTERNAL_API=false 2026-02-04 01:31:54.255890 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-04 01:31:54.255909 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-04 01:31:54.255930 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-04 01:31:54.255949 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-04 01:31:54.255969 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-04 01:31:54.255987 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-04 01:31:54.256006 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-04 01:31:54.326618 | orchestrator | + docker version 2026-02-04 01:31:54.627175 | orchestrator | Client: Docker Engine - Community 2026-02-04 01:31:54.627310 | orchestrator | Version: 27.5.1 2026-02-04 01:31:54.627323 | orchestrator | API version: 1.47 2026-02-04 01:31:54.627329 | orchestrator | Go version: go1.22.11 2026-02-04 01:31:54.627335 | orchestrator | Git commit: 9f9e405 2026-02-04 01:31:54.627340 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-04 01:31:54.627347 | orchestrator | OS/Arch: linux/amd64 2026-02-04 01:31:54.627353 | orchestrator | Context: default 2026-02-04 01:31:54.627358 | orchestrator | 2026-02-04 01:31:54.627363 | orchestrator | Server: Docker Engine - Community 2026-02-04 01:31:54.627369 | orchestrator | Engine: 2026-02-04 01:31:54.627375 | orchestrator | Version: 27.5.1 2026-02-04 01:31:54.627381 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-04 01:31:54.627415 | orchestrator | Go version: go1.22.11 2026-02-04 01:31:54.627421 | orchestrator | Git commit: 4c9b3b0 2026-02-04 01:31:54.627426 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-04 01:31:54.627432 | orchestrator | OS/Arch: linux/amd64 2026-02-04 01:31:54.627437 | orchestrator | Experimental: false 2026-02-04 01:31:54.627444 | orchestrator | containerd: 2026-02-04 01:31:54.627454 | orchestrator | Version: v2.2.1 2026-02-04 01:31:54.627463 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-04 01:31:54.627472 | orchestrator | runc: 2026-02-04 01:31:54.627715 | orchestrator | Version: 1.3.4 2026-02-04 01:31:54.627786 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-04 01:31:54.627794 | orchestrator | docker-init: 2026-02-04 01:31:54.627799 | orchestrator | Version: 0.19.0 2026-02-04 01:31:54.627804 | orchestrator | GitCommit: de40ad0 2026-02-04 01:31:54.630767 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-04 01:31:54.641105 | orchestrator | + set -e 2026-02-04 01:31:54.642464 | orchestrator | + source /opt/manager-vars.sh 2026-02-04 01:31:54.642509 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-04 01:31:54.642521 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-04 01:31:54.642531 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-04 01:31:54.642541 | orchestrator | ++ CEPH_VERSION=reef 2026-02-04 01:31:54.642551 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-04 
01:31:54.642562 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-04 01:31:54.642572 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-04 01:31:54.642582 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-04 01:31:54.642593 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-04 01:31:54.642603 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-04 01:31:54.642613 | orchestrator | ++ export ARA=false 2026-02-04 01:31:54.642624 | orchestrator | ++ ARA=false 2026-02-04 01:31:54.642634 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-04 01:31:54.642644 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-04 01:31:54.642653 | orchestrator | ++ export TEMPEST=false 2026-02-04 01:31:54.642663 | orchestrator | ++ TEMPEST=false 2026-02-04 01:31:54.642673 | orchestrator | ++ export IS_ZUUL=true 2026-02-04 01:31:54.642682 | orchestrator | ++ IS_ZUUL=true 2026-02-04 01:31:54.642692 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-04 01:31:54.642702 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-04 01:31:54.642716 | orchestrator | ++ export EXTERNAL_API=false 2026-02-04 01:31:54.642734 | orchestrator | ++ EXTERNAL_API=false 2026-02-04 01:31:54.642750 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-04 01:31:54.642766 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-04 01:31:54.642784 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-04 01:31:54.642802 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-04 01:31:54.642820 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-04 01:31:54.642833 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-04 01:31:54.642843 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-04 01:31:54.642853 | orchestrator | ++ export INTERACTIVE=false 2026-02-04 01:31:54.642863 | orchestrator | ++ INTERACTIVE=false 2026-02-04 01:31:54.642873 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-04 01:31:54.642888 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-02-04 01:31:54.642898 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-04 01:31:54.642908 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-02-04 01:31:54.649372 | orchestrator | + set -e 2026-02-04 01:31:54.649455 | orchestrator | + VERSION=9.5.0 2026-02-04 01:31:54.649470 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-02-04 01:31:54.656691 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-04 01:31:54.656801 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-04 01:31:54.661186 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-04 01:31:54.665748 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-04 01:31:54.674625 | orchestrator | + set -e 2026-02-04 01:31:54.674709 | orchestrator | /opt/configuration ~ 2026-02-04 01:31:54.674723 | orchestrator | + pushd /opt/configuration 2026-02-04 01:31:54.674735 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-04 01:31:54.676675 | orchestrator | + source /opt/venv/bin/activate 2026-02-04 01:31:54.678072 | orchestrator | ++ deactivate nondestructive 2026-02-04 01:31:54.678120 | orchestrator | ++ '[' -n '' ']' 2026-02-04 01:31:54.678141 | orchestrator | ++ '[' -n '' ']' 2026-02-04 01:31:54.678187 | orchestrator | ++ hash -r 2026-02-04 01:31:54.678204 | orchestrator | ++ '[' -n '' ']' 2026-02-04 01:31:54.678219 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-04 01:31:54.678234 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-04 01:31:54.678282 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-04 01:31:54.678300 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-04 01:31:54.678312 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-04 01:31:54.678327 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-04 01:31:54.678349 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-04 01:31:54.678367 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 01:31:54.678382 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 01:31:54.678397 | orchestrator | ++ export PATH 2026-02-04 01:31:54.678411 | orchestrator | ++ '[' -n '' ']' 2026-02-04 01:31:54.678425 | orchestrator | ++ '[' -z '' ']' 2026-02-04 01:31:54.678440 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-04 01:31:54.678455 | orchestrator | ++ PS1='(venv) ' 2026-02-04 01:31:54.678470 | orchestrator | ++ export PS1 2026-02-04 01:31:54.678484 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-04 01:31:54.678500 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-04 01:31:54.678515 | orchestrator | ++ hash -r 2026-02-04 01:31:54.678531 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-04 01:31:56.079232 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-04 01:31:56.080287 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-04 01:31:56.082149 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-04 01:31:56.083593 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-04 01:31:56.085234 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-04 01:31:56.096904 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-04 01:31:56.100056 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-04 01:31:56.100164 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-04 01:31:56.101836 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-04 01:31:56.144432 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-04 01:31:56.146926 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-04 01:31:56.149431 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-04 01:31:56.151177 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-04 01:31:56.155689 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-04 01:31:56.392732 | orchestrator | ++ which gilt 2026-02-04 01:31:56.397764 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-04 01:31:56.397869 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-04 01:31:56.702626 | orchestrator | osism.cfg-generics: 2026-02-04 01:31:56.894348 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-04 01:31:56.894460 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-04 01:31:56.895309 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-04 01:31:56.895332 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-04 01:31:57.717315 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-04 01:31:57.729047 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-04 01:31:58.088286 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-04 01:31:58.148910 | orchestrator | ~ 2026-02-04 01:31:58.148996 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-04 01:31:58.149004 | orchestrator | + deactivate 2026-02-04 01:31:58.149009 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-04 01:31:58.149016 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 01:31:58.149021 | orchestrator | + export PATH 2026-02-04 01:31:58.149025 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-04 01:31:58.149029 | orchestrator | + '[' -n '' ']' 2026-02-04 01:31:58.149036 | orchestrator | + hash -r 2026-02-04 01:31:58.149040 | orchestrator | + '[' -n '' ']' 2026-02-04 01:31:58.149044 | orchestrator | + unset VIRTUAL_ENV 2026-02-04 01:31:58.149049 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-04 01:31:58.149053 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-04 01:31:58.149057 | orchestrator | + unset -f deactivate 2026-02-04 01:31:58.149061 | orchestrator | + popd 2026-02-04 01:31:58.150537 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-04 01:31:58.150592 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-04 01:31:58.151005 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-04 01:31:58.199667 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-04 01:31:58.199754 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-04 01:31:58.200993 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-04 01:31:58.264662 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-04 01:31:58.264756 | orchestrator | ++ semver 2024.2 2025.1 2026-02-04 01:31:58.333416 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-04 01:31:58.333527 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-04 01:31:58.437904 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-04 01:31:58.438074 | orchestrator | + source /opt/venv/bin/activate 2026-02-04 01:31:58.438104 | orchestrator | ++ deactivate nondestructive 2026-02-04 01:31:58.438126 | orchestrator | ++ '[' -n '' ']' 2026-02-04 01:31:58.438146 | orchestrator | ++ '[' -n '' ']' 2026-02-04 01:31:58.438168 | orchestrator | ++ hash -r 2026-02-04 01:31:58.438189 | orchestrator | ++ '[' -n '' ']' 2026-02-04 01:31:58.438208 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-04 01:31:58.438227 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-04 01:31:58.438239 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-04 01:31:58.438307 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-04 01:31:58.438321 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-04 01:31:58.438333 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-04 01:31:58.438359 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-04 01:31:58.438372 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 01:31:58.438404 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 01:31:58.438417 | orchestrator | ++ export PATH 2026-02-04 01:31:58.438428 | orchestrator | ++ '[' -n '' ']' 2026-02-04 01:31:58.438440 | orchestrator | ++ '[' -z '' ']' 2026-02-04 01:31:58.438451 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-04 01:31:58.438462 | orchestrator | ++ PS1='(venv) ' 2026-02-04 01:31:58.438474 | orchestrator | ++ export PS1 2026-02-04 01:31:58.438485 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-04 01:31:58.438496 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-04 01:31:58.438508 | orchestrator | ++ hash -r 2026-02-04 01:31:58.438519 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-04 01:31:59.885035 | orchestrator | 2026-02-04 01:31:59.885118 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-04 01:31:59.885131 | orchestrator | 2026-02-04 01:31:59.885137 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-04 01:32:00.509561 | orchestrator | ok: [testbed-manager] 2026-02-04 01:32:00.509642 | orchestrator | 2026-02-04 01:32:00.509652 | orchestrator | TASK [Copy fact files] ********************************************************* 
2026-02-04 01:32:01.576363 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:01.576491 | orchestrator | 2026-02-04 01:32:01.576515 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-04 01:32:01.577478 | orchestrator | 2026-02-04 01:32:01.577558 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 01:32:04.229882 | orchestrator | ok: [testbed-manager] 2026-02-04 01:32:04.230002 | orchestrator | 2026-02-04 01:32:04.230086 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-04 01:32:04.288657 | orchestrator | ok: [testbed-manager] 2026-02-04 01:32:04.288801 | orchestrator | 2026-02-04 01:32:04.288831 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-04 01:32:04.813645 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:04.813751 | orchestrator | 2026-02-04 01:32:04.813768 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-04 01:32:04.862442 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:32:04.862548 | orchestrator | 2026-02-04 01:32:04.862566 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-04 01:32:05.222389 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:05.222523 | orchestrator | 2026-02-04 01:32:05.222542 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-04 01:32:05.572391 | orchestrator | ok: [testbed-manager] 2026-02-04 01:32:05.572481 | orchestrator | 2026-02-04 01:32:05.572493 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-04 01:32:05.714390 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:32:05.714493 | orchestrator | 2026-02-04 01:32:05.714510 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-02-04 01:32:05.714524 | orchestrator | 2026-02-04 01:32:05.714537 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 01:32:07.704818 | orchestrator | ok: [testbed-manager] 2026-02-04 01:32:07.704947 | orchestrator | 2026-02-04 01:32:07.704966 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-04 01:32:07.807182 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-04 01:32:07.807324 | orchestrator | 2026-02-04 01:32:07.807345 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-04 01:32:07.867741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-04 01:32:07.867841 | orchestrator | 2026-02-04 01:32:07.867856 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-04 01:32:09.034418 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-04 01:32:09.034521 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-04 01:32:09.034538 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-04 01:32:09.034551 | orchestrator | 2026-02-04 01:32:09.034565 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-04 01:32:11.014216 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-04 01:32:11.014361 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-04 01:32:11.014370 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-04 01:32:11.014376 | orchestrator | 2026-02-04 01:32:11.014381 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-02-04 01:32:11.692181 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-04 01:32:11.692311 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:11.692329 | orchestrator | 2026-02-04 01:32:11.692343 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-04 01:32:12.368727 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-04 01:32:12.368824 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:12.368835 | orchestrator | 2026-02-04 01:32:12.368843 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-04 01:32:12.430681 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:32:12.430758 | orchestrator | 2026-02-04 01:32:12.430769 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-04 01:32:12.838581 | orchestrator | ok: [testbed-manager] 2026-02-04 01:32:12.838714 | orchestrator | 2026-02-04 01:32:12.838738 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-04 01:32:12.921053 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-04 01:32:12.921149 | orchestrator | 2026-02-04 01:32:12.921164 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-04 01:32:14.118006 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:14.118198 | orchestrator | 2026-02-04 01:32:14.118215 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-04 01:32:15.014065 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:15.014154 | orchestrator | 2026-02-04 01:32:15.014164 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-04 01:32:37.294617 | 
orchestrator | changed: [testbed-manager] 2026-02-04 01:32:37.294741 | orchestrator | 2026-02-04 01:32:37.294757 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-04 01:32:37.349198 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:32:37.349324 | orchestrator | 2026-02-04 01:32:37.349361 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-04 01:32:37.349374 | orchestrator | 2026-02-04 01:32:37.349385 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 01:32:39.270411 | orchestrator | ok: [testbed-manager] 2026-02-04 01:32:39.270565 | orchestrator | 2026-02-04 01:32:39.270611 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-04 01:32:39.416215 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-04 01:32:39.416399 | orchestrator | 2026-02-04 01:32:39.416411 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-04 01:32:39.496041 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 01:32:39.496119 | orchestrator | 2026-02-04 01:32:39.496129 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-04 01:32:42.550105 | orchestrator | ok: [testbed-manager] 2026-02-04 01:32:42.550196 | orchestrator | 2026-02-04 01:32:42.550208 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-04 01:32:42.611657 | orchestrator | ok: [testbed-manager] 2026-02-04 01:32:42.611739 | orchestrator | 2026-02-04 01:32:42.611754 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-04 01:32:42.759921 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-04 01:32:42.760052 | orchestrator | 2026-02-04 01:32:42.760075 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-04 01:32:45.978313 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-04 01:32:45.978425 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-04 01:32:45.978441 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-04 01:32:45.978454 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-04 01:32:45.978466 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-04 01:32:45.978477 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-04 01:32:45.978489 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-04 01:32:45.978500 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-04 01:32:45.978511 | orchestrator | 2026-02-04 01:32:45.978523 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-04 01:32:46.667194 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:46.667301 | orchestrator | 2026-02-04 01:32:46.667314 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-04 01:32:47.370474 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:47.370577 | orchestrator | 2026-02-04 01:32:47.370594 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-04 01:32:47.469308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-04 01:32:47.469402 | orchestrator | 2026-02-04 01:32:47.469418 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-02-04 01:32:48.822800 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-04 01:32:48.822935 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-04 01:32:48.822963 | orchestrator | 2026-02-04 01:32:48.822985 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-04 01:32:49.506205 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:49.506364 | orchestrator | 2026-02-04 01:32:49.506381 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-04 01:32:49.571298 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:32:49.571424 | orchestrator | 2026-02-04 01:32:49.571449 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-04 01:32:49.654383 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-04 01:32:49.654470 | orchestrator | 2026-02-04 01:32:49.654484 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-04 01:32:50.355279 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:50.355361 | orchestrator | 2026-02-04 01:32:50.355373 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-04 01:32:50.440117 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-04 01:32:50.440235 | orchestrator | 2026-02-04 01:32:50.440283 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-04 01:32:51.919125 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-04 01:32:51.919227 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-02-04 01:32:51.919281 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:51.919298 | orchestrator | 2026-02-04 01:32:51.919310 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-04 01:32:52.614839 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:52.614957 | orchestrator | 2026-02-04 01:32:52.614981 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-04 01:32:52.685794 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:32:52.685879 | orchestrator | 2026-02-04 01:32:52.685895 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-04 01:32:52.809674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-04 01:32:52.809772 | orchestrator | 2026-02-04 01:32:52.809790 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-04 01:32:53.407235 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:53.407383 | orchestrator | 2026-02-04 01:32:53.407396 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-04 01:32:53.887302 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:53.887449 | orchestrator | 2026-02-04 01:32:53.887470 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-04 01:32:55.304858 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-04 01:32:55.304996 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-04 01:32:55.305023 | orchestrator | 2026-02-04 01:32:55.305058 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-04 01:32:56.032405 | orchestrator | changed: [testbed-manager] 2026-02-04 
01:32:56.032518 | orchestrator | 2026-02-04 01:32:56.032553 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-04 01:32:56.473768 | orchestrator | ok: [testbed-manager] 2026-02-04 01:32:56.473837 | orchestrator | 2026-02-04 01:32:56.473846 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-04 01:32:56.873307 | orchestrator | changed: [testbed-manager] 2026-02-04 01:32:56.873408 | orchestrator | 2026-02-04 01:32:56.873423 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-04 01:32:56.924449 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:32:56.924528 | orchestrator | 2026-02-04 01:32:56.924541 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-04 01:32:57.032058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-04 01:32:57.032188 | orchestrator | 2026-02-04 01:32:57.032206 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-04 01:32:57.074137 | orchestrator | ok: [testbed-manager] 2026-02-04 01:32:57.074279 | orchestrator | 2026-02-04 01:32:57.074304 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-04 01:32:59.237645 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-04 01:32:59.237751 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-04 01:32:59.237770 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-04 01:32:59.237783 | orchestrator | 2026-02-04 01:32:59.237796 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-04 01:33:00.026647 | orchestrator | changed: [testbed-manager] 2026-02-04 
01:33:00.026742 | orchestrator | 2026-02-04 01:33:00.026757 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-04 01:33:00.799587 | orchestrator | changed: [testbed-manager] 2026-02-04 01:33:00.799708 | orchestrator | 2026-02-04 01:33:00.799727 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-04 01:33:01.563354 | orchestrator | changed: [testbed-manager] 2026-02-04 01:33:01.563444 | orchestrator | 2026-02-04 01:33:01.563454 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-04 01:33:01.653624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-04 01:33:01.653731 | orchestrator | 2026-02-04 01:33:01.653752 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-04 01:33:01.710466 | orchestrator | ok: [testbed-manager] 2026-02-04 01:33:01.710581 | orchestrator | 2026-02-04 01:33:01.710600 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-04 01:33:02.487673 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-04 01:33:02.487778 | orchestrator | 2026-02-04 01:33:02.487796 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-04 01:33:02.580159 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-04 01:33:02.580287 | orchestrator | 2026-02-04 01:33:02.580300 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-04 01:33:03.498975 | orchestrator | changed: [testbed-manager] 2026-02-04 01:33:03.499080 | orchestrator | 2026-02-04 01:33:03.499098 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-02-04 01:33:04.184135 | orchestrator | ok: [testbed-manager] 2026-02-04 01:33:04.184261 | orchestrator | 2026-02-04 01:33:04.184279 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-04 01:33:04.245640 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:33:04.245744 | orchestrator | 2026-02-04 01:33:04.245762 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-04 01:33:04.318321 | orchestrator | ok: [testbed-manager] 2026-02-04 01:33:04.318451 | orchestrator | 2026-02-04 01:33:04.318477 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-04 01:33:05.229565 | orchestrator | changed: [testbed-manager] 2026-02-04 01:33:05.229664 | orchestrator | 2026-02-04 01:33:05.229678 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-04 01:34:19.845507 | orchestrator | changed: [testbed-manager] 2026-02-04 01:34:19.845616 | orchestrator | 2026-02-04 01:34:19.845628 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-04 01:34:20.919315 | orchestrator | ok: [testbed-manager] 2026-02-04 01:34:20.919403 | orchestrator | 2026-02-04 01:34:20.919416 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-04 01:34:20.988999 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:34:20.989076 | orchestrator | 2026-02-04 01:34:20.989084 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-04 01:34:27.673184 | orchestrator | changed: [testbed-manager] 2026-02-04 01:34:27.673317 | orchestrator | 2026-02-04 01:34:27.673334 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-02-04 01:34:27.741402 | orchestrator | ok: [testbed-manager] 2026-02-04 01:34:27.741508 | orchestrator | 2026-02-04 01:34:27.741527 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-04 01:34:27.741542 | orchestrator | 2026-02-04 01:34:27.741553 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-04 01:34:27.918291 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:34:27.918380 | orchestrator | 2026-02-04 01:34:27.918393 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-04 01:35:27.980119 | orchestrator | Pausing for 60 seconds 2026-02-04 01:35:27.980251 | orchestrator | changed: [testbed-manager] 2026-02-04 01:35:27.980291 | orchestrator | 2026-02-04 01:35:27.980301 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-04 01:35:30.611505 | orchestrator | changed: [testbed-manager] 2026-02-04 01:35:30.611601 | orchestrator | 2026-02-04 01:35:30.611616 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for a healthy manager service] *** 2026-02-04 01:36:32.906656 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (50 retries left). 2026-02-04 01:36:32.906800 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (49 retries left). 2026-02-04 01:36:32.906853 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (48 retries left). 
2026-02-04 01:36:32.906875 | orchestrator | changed: [testbed-manager]
2026-02-04 01:36:32.906896 | orchestrator |
2026-02-04 01:36:32.906915 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-04 01:36:44.530788 | orchestrator | changed: [testbed-manager]
2026-02-04 01:36:44.530931 | orchestrator |
2026-02-04 01:36:44.530958 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-04 01:36:44.632140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-04 01:36:44.632224 | orchestrator |
2026-02-04 01:36:44.632235 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-04 01:36:44.632244 | orchestrator |
2026-02-04 01:36:44.632252 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-04 01:36:44.680164 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:36:44.680285 | orchestrator |
2026-02-04 01:36:44.680315 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-04 01:36:44.752290 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-04 01:36:44.752378 | orchestrator |
2026-02-04 01:36:44.752391 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-04 01:36:45.512076 | orchestrator | changed: [testbed-manager]
2026-02-04 01:36:45.512152 | orchestrator |
2026-02-04 01:36:45.512160 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-04 01:36:48.686749 | orchestrator | ok: [testbed-manager]
2026-02-04 01:36:48.686840 | orchestrator |
2026-02-04 01:36:48.686862 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-04 01:36:48.768060 | orchestrator | ok: [testbed-manager] => {
2026-02-04 01:36:48.768167 | orchestrator |     "version_check_result.stdout_lines": [
2026-02-04 01:36:48.768184 | orchestrator |         "=== OSISM Container Version Check ===",
2026-02-04 01:36:48.768195 | orchestrator |         "Checking running containers against expected versions...",
2026-02-04 01:36:48.768206 | orchestrator |         "",
2026-02-04 01:36:48.768217 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-04 01:36:48.768228 | orchestrator |         " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-04 01:36:48.768241 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.768253 | orchestrator |         " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-04 01:36:48.768264 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.768275 | orchestrator |         "",
2026-02-04 01:36:48.768287 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-04 01:36:48.768326 | orchestrator |         " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-04 01:36:48.768338 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.768350 | orchestrator |         " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-04 01:36:48.768361 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.768372 | orchestrator |         "",
2026-02-04 01:36:48.768383 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-04 01:36:48.768394 | orchestrator |         " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-04 01:36:48.768405 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.768416 | orchestrator |         " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-04 01:36:48.768427 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.768438 | orchestrator |         "",
2026-02-04 01:36:48.768449 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-04 01:36:48.768460 | orchestrator |         " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-04 01:36:48.768471 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.768482 | orchestrator |         " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-04 01:36:48.768493 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.768507 | orchestrator |         "",
2026-02-04 01:36:48.768528 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-04 01:36:48.768546 | orchestrator |         " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-04 01:36:48.768564 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.768584 | orchestrator |         " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-04 01:36:48.768633 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.768653 | orchestrator |         "",
2026-02-04 01:36:48.768672 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-02-04 01:36:48.768689 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-04 01:36:48.768702 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.768715 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-04 01:36:48.768728 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.768740 | orchestrator |         "",
2026-02-04 01:36:48.768753 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-02-04 01:36:48.768766 | orchestrator |         " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-04 01:36:48.768778 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.768791 | orchestrator |         " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-04 01:36:48.768805 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.768817 | orchestrator |         "",
2026-02-04 01:36:48.768829 | orchestrator |         "Checking service: mariadb (MariaDB for ARA)",
2026-02-04 01:36:48.768843 | orchestrator |         " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-04 01:36:48.768856 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.768868 | orchestrator |         " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-02-04 01:36:48.768881 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.768894 | orchestrator |         "",
2026-02-04 01:36:48.768906 | orchestrator |         "Checking service: frontend (OSISM Frontend)",
2026-02-04 01:36:48.768919 | orchestrator |         " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-02-04 01:36:48.768929 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.768940 | orchestrator |         " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1",
2026-02-04 01:36:48.768951 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.768962 | orchestrator |         "",
2026-02-04 01:36:48.768973 | orchestrator |         "Checking service: redis (Redis Cache)",
2026-02-04 01:36:48.768983 | orchestrator |         " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-04 01:36:48.768994 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.769010 | orchestrator |         " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-02-04 01:36:48.769028 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.769046 | orchestrator |         "",
2026-02-04 01:36:48.769064 | orchestrator |         "Checking service: api (OSISM API Service)",
2026-02-04 01:36:48.769096 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-04 01:36:48.769114 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.769133 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-04 01:36:48.769150 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.769169 | orchestrator |         "",
2026-02-04 01:36:48.769188 | orchestrator |         "Checking service: listener (OpenStack Event Listener)",
2026-02-04 01:36:48.769205 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-04 01:36:48.769223 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.769240 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-04 01:36:48.769255 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.769272 | orchestrator |         "",
2026-02-04 01:36:48.769288 | orchestrator |         "Checking service: openstack (OpenStack Integration)",
2026-02-04 01:36:48.769305 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-04 01:36:48.769322 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.769338 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-04 01:36:48.769354 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.769372 | orchestrator |         "",
2026-02-04 01:36:48.769388 | orchestrator |         "Checking service: beat (Celery Beat Scheduler)",
2026-02-04 01:36:48.769404 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-04 01:36:48.769421 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.769438 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-04 01:36:48.769483 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.769503 | orchestrator |         "",
2026-02-04 01:36:48.769521 | orchestrator |         "Checking service: flower (Celery Flower Monitor)",
2026-02-04 01:36:48.769539 | orchestrator |         " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-04 01:36:48.769572 | orchestrator |         " Enabled: true",
2026-02-04 01:36:48.769617 | orchestrator |         " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-04 01:36:48.769636 | orchestrator |         " Status: ✅ MATCH",
2026-02-04 01:36:48.769652 | orchestrator |         "",
2026-02-04 01:36:48.769669 | orchestrator |         "=== Summary ===",
2026-02-04 01:36:48.769686 | orchestrator |         "Errors (version mismatches): 0",
2026-02-04 01:36:48.769703 | orchestrator |         "Warnings (expected containers not running): 0",
2026-02-04 01:36:48.769719 | orchestrator |         "",
2026-02-04 01:36:48.769736 | orchestrator |         "✅ All running containers match expected versions!"
2026-02-04 01:36:48.769753 | orchestrator |     ]
2026-02-04 01:36:48.769771 | orchestrator | }
2026-02-04 01:36:48.769789 | orchestrator |
2026-02-04 01:36:48.769807 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-02-04 01:36:48.831196 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:36:48.831294 | orchestrator |
2026-02-04 01:36:48.831309 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:36:48.831323 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-02-04 01:36:48.831335 | orchestrator |
2026-02-04 01:36:48.959420 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-04 01:36:48.959535 | orchestrator | + deactivate
2026-02-04 01:36:48.959555 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-04 01:36:48.959573 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-04 01:36:48.959586 | orchestrator | + export PATH
2026-02-04 01:36:48.959649 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-04 01:36:48.959662 | orchestrator | + '[' -n '' ']'
2026-02-04 01:36:48.959674 | orchestrator | + hash -r
2026-02-04 01:36:48.959687 | orchestrator | + '[' -n '' ']'
2026-02-04 01:36:48.959725 | orchestrator | + unset VIRTUAL_ENV
2026-02-04 01:36:48.959741 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-04 01:36:48.959755 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-04 01:36:48.959768 | orchestrator | + unset -f deactivate
2026-02-04 01:36:48.959784 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-02-04 01:36:48.967784 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-04 01:36:48.967878 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-02-04 01:36:48.967921 | orchestrator | + local max_attempts=60
2026-02-04 01:36:48.967933 | orchestrator | + local name=ceph-ansible
2026-02-04 01:36:48.967942 | orchestrator | + local attempt_num=1
2026-02-04 01:36:48.968558 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:36:49.009940 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-04 01:36:49.010010 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-02-04 01:36:49.010061 | orchestrator | + local max_attempts=60
2026-02-04 01:36:49.010068 | orchestrator | + local name=kolla-ansible
2026-02-04 01:36:49.010074 | orchestrator | + local attempt_num=1
2026-02-04 01:36:49.011300 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-02-04 01:36:49.057978 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-04 01:36:49.058134 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-02-04 01:36:49.058154 | orchestrator | + local max_attempts=60
2026-02-04 01:36:49.058170 | orchestrator | + local name=osism-ansible
2026-02-04 01:36:49.058184 | orchestrator | + local attempt_num=1
2026-02-04 01:36:49.059023 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-02-04 01:36:49.091737 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-04 01:36:49.091807 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-04 01:36:49.091813 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-02-04 01:36:49.756841 | orchestrator | + docker compose --project-directory /opt/manager ps
2026-02-04 01:36:49.934679 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2026-02-04 01:36:49.934754 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy)
2026-02-04 01:36:49.934762 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy)
2026-02-04 01:36:49.934768 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp
2026-02-04 01:36:49.934774 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp
2026-02-04 01:36:49.934792 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy)
2026-02-04 01:36:49.934797 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy)
2026-02-04 01:36:49.934801 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy)
2026-02-04 01:36:49.934804 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy)
2026-02-04 01:36:49.934808 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp
2026-02-04 01:36:49.934812 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy)
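The version check displayed above compares the expected image reference for each enabled service against the image the running container actually uses, then counts mismatches as errors and missing containers as warnings. A minimal sketch of that comparison (hypothetical helper, not the deployed script):

```python
def check_versions(expected, running):
    """Compare expected image references against running containers.

    expected: {service: image_ref} for enabled services.
    running:  {service: image_ref} for containers actually up.
    Returns (errors, warnings): mismatched tags, and expected services
    with no running container.
    """
    errors, warnings = [], []
    for service, image in expected.items():
        actual = running.get(service)
        if actual is None:
            warnings.append(service)            # expected container not running
        elif actual != image:
            errors.append((service, image, actual))
    return errors, warnings


expected = {
    "osismclient": "registry.osism.tech/osism/osism:0.20251130.1",
    "ara-server": "registry.osism.tech/osism/ara-server:1.7.3",
}
running = dict(expected)  # everything matches, as in the log above
errors, warnings = check_versions(expected, running)
```

With zero errors and zero warnings the check reports "All running containers match expected versions!", which is the success path this run took.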
2026-02-04 01:36:49.934816 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp
2026-02-04 01:36:49.934820 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy)
2026-02-04 01:36:49.934842 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp
2026-02-04 01:36:49.934846 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy)
2026-02-04 01:36:49.934850 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy)
2026-02-04 01:36:49.940877 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-04 01:36:50.001062 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-04 01:36:50.001141 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2026-02-04 01:36:50.006186 | orchestrator | + osism apply resolvconf -l testbed-manager
2026-02-04 01:37:02.514336 | orchestrator | 2026-02-04 01:37:02 | INFO  | Task cdbc9e65-f9a9-469a-941f-7299756d4224 (resolvconf) was prepared for execution.
2026-02-04 01:37:02.514482 | orchestrator | 2026-02-04 01:37:02 | INFO  | It takes a moment until task cdbc9e65-f9a9-469a-941f-7299756d4224 (resolvconf) has been started and output is visible here.
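The shell trace above calls `semver 9.5.0 7.0.0` and tests the result with `[[ 1 -ge 0 ]]`, i.e. the helper returns 1 when the first version is greater. A sketch of that comparison, assuming plain numeric dotted versions with no pre-release handling (the actual `semver` helper is not shown in the log):

```python
def semver_cmp(a: str, b: str) -> int:
    """Compare dotted numeric versions: 1 if a > b, -1 if a < b, 0 if equal."""
    pa = [int(x) for x in a.split(".")]
    pb = [int(x) for x in b.split(".")]
    # Python compares integer lists element by element, which matches
    # major/minor/patch precedence for equal-length versions.
    return (pa > pb) - (pa < pb)
```

Here `semver_cmp("9.5.0", "7.0.0")` yields 1, so the `-ge 0` branch in the script is taken.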
2026-02-04 01:37:17.731415 | orchestrator |
2026-02-04 01:37:17.731532 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-02-04 01:37:17.731549 | orchestrator |
2026-02-04 01:37:17.731562 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-04 01:37:17.731575 | orchestrator | Wednesday 04 February 2026 01:37:07 +0000 (0:00:00.168) 0:00:00.168 ****
2026-02-04 01:37:17.731588 | orchestrator | ok: [testbed-manager]
2026-02-04 01:37:17.731601 | orchestrator |
2026-02-04 01:37:17.731613 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-02-04 01:37:17.731626 | orchestrator | Wednesday 04 February 2026 01:37:10 +0000 (0:00:03.875) 0:00:04.043 ****
2026-02-04 01:37:17.731638 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:37:17.731652 | orchestrator |
2026-02-04 01:37:17.731664 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-02-04 01:37:17.731676 | orchestrator | Wednesday 04 February 2026 01:37:10 +0000 (0:00:00.064) 0:00:04.108 ****
2026-02-04 01:37:17.731688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-02-04 01:37:17.731702 | orchestrator |
2026-02-04 01:37:17.731714 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-02-04 01:37:17.731776 | orchestrator | Wednesday 04 February 2026 01:37:11 +0000 (0:00:00.100) 0:00:04.208 ****
2026-02-04 01:37:17.731809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-02-04 01:37:17.731822 | orchestrator |
2026-02-04 01:37:17.731834 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-02-04 01:37:17.731845 | orchestrator | Wednesday 04 February 2026 01:37:11 +0000 (0:00:00.076) 0:00:04.285 ****
2026-02-04 01:37:17.731856 | orchestrator | ok: [testbed-manager]
2026-02-04 01:37:17.731868 | orchestrator |
2026-02-04 01:37:17.731880 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-02-04 01:37:17.731891 | orchestrator | Wednesday 04 February 2026 01:37:12 +0000 (0:00:01.281) 0:00:05.567 ****
2026-02-04 01:37:17.731902 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:37:17.731914 | orchestrator |
2026-02-04 01:37:17.731926 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-02-04 01:37:17.731937 | orchestrator | Wednesday 04 February 2026 01:37:12 +0000 (0:00:00.066) 0:00:05.633 ****
2026-02-04 01:37:17.731978 | orchestrator | ok: [testbed-manager]
2026-02-04 01:37:17.731991 | orchestrator |
2026-02-04 01:37:17.732004 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-02-04 01:37:17.732017 | orchestrator | Wednesday 04 February 2026 01:37:13 +0000 (0:00:00.557) 0:00:06.190 ****
2026-02-04 01:37:17.732030 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:37:17.732044 | orchestrator |
2026-02-04 01:37:17.732057 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-02-04 01:37:17.732072 | orchestrator | Wednesday 04 February 2026 01:37:13 +0000 (0:00:00.082) 0:00:06.273 ****
2026-02-04 01:37:17.732085 | orchestrator | changed: [testbed-manager]
2026-02-04 01:37:17.732098 | orchestrator |
2026-02-04 01:37:17.732111 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-02-04 01:37:17.732126 | orchestrator | Wednesday 04 February 2026 01:37:13 +0000 (0:00:00.618) 0:00:06.892 ****
2026-02-04 01:37:17.732139 | orchestrator | changed: [testbed-manager]
2026-02-04 01:37:17.732152 | orchestrator |
2026-02-04 01:37:17.732166 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-02-04 01:37:17.732179 | orchestrator | Wednesday 04 February 2026 01:37:14 +0000 (0:00:01.095) 0:00:07.988 ****
2026-02-04 01:37:17.732193 | orchestrator | ok: [testbed-manager]
2026-02-04 01:37:17.732205 | orchestrator |
2026-02-04 01:37:17.732217 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-02-04 01:37:17.732228 | orchestrator | Wednesday 04 February 2026 01:37:16 +0000 (0:00:01.158) 0:00:09.146 ****
2026-02-04 01:37:17.732240 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-02-04 01:37:17.732251 | orchestrator |
2026-02-04 01:37:17.732263 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-02-04 01:37:17.732274 | orchestrator | Wednesday 04 February 2026 01:37:16 +0000 (0:00:00.088) 0:00:09.234 ****
2026-02-04 01:37:17.732285 | orchestrator | changed: [testbed-manager]
2026-02-04 01:37:17.732297 | orchestrator |
2026-02-04 01:37:17.732308 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:37:17.732320 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-04 01:37:17.732332 | orchestrator |
2026-02-04 01:37:17.732343 | orchestrator |
2026-02-04 01:37:17.732354 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:37:17.732365 | orchestrator | Wednesday 04 February 2026 01:37:17 +0000 (0:00:01.347) 0:00:10.582 ****
2026-02-04 01:37:17.732377 | orchestrator | ===============================================================================
2026-02-04 01:37:17.732388 | orchestrator | Gathering Facts --------------------------------------------------------- 3.88s
2026-02-04 01:37:17.732399 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.35s
2026-02-04 01:37:17.732410 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.28s
2026-02-04 01:37:17.732421 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.16s
2026-02-04 01:37:17.732432 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.10s
2026-02-04 01:37:17.732444 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.62s
2026-02-04 01:37:17.732473 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.56s
2026-02-04 01:37:17.732486 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.10s
2026-02-04 01:37:17.732497 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2026-02-04 01:37:17.732508 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2026-02-04 01:37:17.732519 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s
2026-02-04 01:37:17.732530 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2026-02-04 01:37:17.732549 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2026-02-04 01:37:18.112563 | orchestrator | + osism apply sshconfig
2026-02-04 01:37:30.388450 | orchestrator | 2026-02-04 01:37:30 | INFO  | Task 5d9d13de-fe1a-46c1-ae9d-6d622f3663e9 (sshconfig) was prepared for execution.
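The `osism apply sshconfig` run that follows writes one SSH config fragment per host into `~/.ssh/config.d` and then assembles them into a single config file. A minimal sketch of that assemble step (hypothetical helper names, assuming simple ordered concatenation of the fragments):

```python
import tempfile
from pathlib import Path


def assemble_ssh_config(config_dir: Path, target: Path) -> None:
    """Concatenate per-host fragments (sorted by filename) into one ssh config."""
    parts = [p.read_text() for p in sorted(config_dir.iterdir())]
    target.write_text("".join(parts))


# Build two fragments in a scratch directory and assemble them.
tmp = Path(tempfile.mkdtemp())
frag_dir = tmp / "config.d"
frag_dir.mkdir()
(frag_dir / "testbed-manager").write_text("Host testbed-manager\n    User dragon\n")
(frag_dir / "testbed-node-0").write_text("Host testbed-node-0\n    User dragon\n")
assemble_ssh_config(frag_dir, tmp / "config")
assembled = (tmp / "config").read_text()
```

Sorting by filename keeps the assembled file deterministic across runs, which matters for the `changed`/`ok` idempotence that Ansible's `assemble` module reports.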
2026-02-04 01:37:30.388541 | orchestrator | 2026-02-04 01:37:30 | INFO  | It takes a moment until task 5d9d13de-fe1a-46c1-ae9d-6d622f3663e9 (sshconfig) has been started and output is visible here.
2026-02-04 01:37:43.122421 | orchestrator |
2026-02-04 01:37:43.122504 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2026-02-04 01:37:43.122512 | orchestrator |
2026-02-04 01:37:43.122531 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2026-02-04 01:37:43.122535 | orchestrator | Wednesday 04 February 2026 01:37:34 +0000 (0:00:00.175) 0:00:00.175 ****
2026-02-04 01:37:43.122540 | orchestrator | ok: [testbed-manager]
2026-02-04 01:37:43.122545 | orchestrator |
2026-02-04 01:37:43.122550 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2026-02-04 01:37:43.122554 | orchestrator | Wednesday 04 February 2026 01:37:35 +0000 (0:00:00.607) 0:00:00.783 ****
2026-02-04 01:37:43.122558 | orchestrator | changed: [testbed-manager]
2026-02-04 01:37:43.122563 | orchestrator |
2026-02-04 01:37:43.122567 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2026-02-04 01:37:43.122571 | orchestrator | Wednesday 04 February 2026 01:37:36 +0000 (0:00:00.581) 0:00:01.364 ****
2026-02-04 01:37:43.122575 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2026-02-04 01:37:43.122579 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2026-02-04 01:37:43.122584 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2026-02-04 01:37:43.122587 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2026-02-04 01:37:43.122591 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2026-02-04 01:37:43.122595 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2026-02-04 01:37:43.122599 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2026-02-04 01:37:43.122602 | orchestrator |
2026-02-04 01:37:43.122606 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2026-02-04 01:37:43.122610 | orchestrator | Wednesday 04 February 2026 01:37:42 +0000 (0:00:06.145) 0:00:07.510 ****
2026-02-04 01:37:43.122614 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:37:43.122617 | orchestrator |
2026-02-04 01:37:43.122621 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2026-02-04 01:37:43.122625 | orchestrator | Wednesday 04 February 2026 01:37:42 +0000 (0:00:00.076) 0:00:07.587 ****
2026-02-04 01:37:43.122629 | orchestrator | changed: [testbed-manager]
2026-02-04 01:37:43.122633 | orchestrator |
2026-02-04 01:37:43.122637 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:37:43.122642 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:37:43.122646 | orchestrator |
2026-02-04 01:37:43.122650 | orchestrator |
2026-02-04 01:37:43.122654 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:37:43.122658 | orchestrator | Wednesday 04 February 2026 01:37:42 +0000 (0:00:00.556) 0:00:08.144 ****
2026-02-04 01:37:43.122662 | orchestrator | ===============================================================================
2026-02-04 01:37:43.122665 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.15s
2026-02-04 01:37:43.122669 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.61s
2026-02-04 01:37:43.122673 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.58s
2026-02-04 01:37:43.122677 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s
2026-02-04 01:37:43.122697 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s
2026-02-04 01:37:43.495416 | orchestrator | + osism apply known-hosts
2026-02-04 01:37:55.779026 | orchestrator | 2026-02-04 01:37:55 | INFO  | Task de978d33-3603-4ac8-b70e-7bf99157e621 (known-hosts) was prepared for execution.
2026-02-04 01:37:55.779137 | orchestrator | 2026-02-04 01:37:55 | INFO  | It takes a moment until task de978d33-3603-4ac8-b70e-7bf99157e621 (known-hosts) has been started and output is visible here.
2026-02-04 01:38:14.049658 | orchestrator |
2026-02-04 01:38:14.049767 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2026-02-04 01:38:14.049783 | orchestrator |
2026-02-04 01:38:14.049793 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2026-02-04 01:38:14.049803 | orchestrator | Wednesday 04 February 2026 01:38:00 +0000 (0:00:00.229) 0:00:00.229 ****
2026-02-04 01:38:14.049809 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-02-04 01:38:14.049816 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-02-04 01:38:14.049822 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-02-04 01:38:14.049828 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-02-04 01:38:14.049834 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-04 01:38:14.049839 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-04 01:38:14.049844 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-04 01:38:14.049850 | orchestrator |
2026-02-04 01:38:14.049855 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2026-02-04 01:38:14.049862 | orchestrator | Wednesday 04 February 2026 01:38:06 +0000 (0:00:06.326) 0:00:06.556 ****
2026-02-04 01:38:14.049869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-02-04 01:38:14.049876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-02-04 01:38:14.049882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-02-04 01:38:14.049887 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-02-04 01:38:14.049893 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-02-04 01:38:14.049906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-02-04 01:38:14.049912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-02-04 01:38:14.049917 | orchestrator |
2026-02-04 01:38:14.049923 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-04 01:38:14.049928 | orchestrator | Wednesday 04 February 2026 01:38:06 +0000 (0:00:00.172) 0:00:06.728 ****
2026-02-04 01:38:14.049939 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZOhBsflscoHoD4waMR/lFbDpLqvSxT9oTK92p4SgNYQV05nMv/9eQfsoW1J9xpWiS1MWoTWdj+gPXkbEFGjYRhZJpMb/bl6W/nHoF77tVF9MOXIgRdKiIBPrl/5d+TOPGZvBrnoe/Gb4fPETCn+S+vWVz9e5huw9frVqObxKl/CCuUU7YAJZziiGkr66N8nqrjcBEjS+SiJNBPkeVI5xmQ9VEPXezt3WEVFhdDo16FF1NKsfWkiLp2ptzUSYcxGT3ejXClEgGbzERH1/P+lBIrnQsgODpgRBfM1aVaKrThG8I/iC/wB2zgxNKcPbaO92uMStytmaU1qBN8LTxqM354I6N8iXdvhCYJfE/NFl9LbqUMJNJklGXMUGV/0VlEOEfZg0nLvLOoPF4OXaT0iKsdaLOhD5uchw/8NPcyFWArNY9M0GaHam7TWYxsFkNyYyr4WlveOnTjtyWigbvVszByA/MXkTVzQ2XStvsruA3vliNz/VJPcGUOWP50/1ILPk=)
2026-02-04 01:38:14.050079 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFbR38ITHEPwNypBX6by7oQQJutanTSsX0c060d9zjgoCk4t1xcy5qjJ9mqIBDbwNkjuj7wMMiHJC6VvbGFR2DM=)
2026-02-04 01:38:14.050091 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBknmAUjSPShP9c+DuPPHRnLU8wnQi+EOqKxxXY20MNX)
2026-02-04 01:38:14.050098 | orchestrator |
2026-02-04 01:38:14.050104 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-04 01:38:14.050109 | orchestrator | Wednesday 04 February 2026 01:38:08 +0000 (0:00:01.321) 0:00:08.050 ****
2026-02-04 01:38:14.050115 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPlYePBh/hl3lItpxjlBVtiKXKZBumxrWLIHEEgIIHVE)
2026-02-04 01:38:14.050144 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv4S7zjb+Z1xVfElcvi0rCflyInlHJbnyEdrCybf9B4MZakrvkHYO0CIyCtkxF3qjWFJrBbOwyrn8c3iS3sqW4g8BOL8v46fpHAeOG2nzio/iy8i12KG27jdM9o6Nhfkdsw0zEkw7ITti4+Oa/nj4pdafmwFIcxVLJkgbPuxTiDCmlP2YvDhhz7PM8zuHM8LO5dUMZEjABqz2WGLMDNHuIduZnFsD/H63gSfwsjDeC6RCsBYWA/l7qzAJtN0ErvKOqkyfJLNWCkqg+Jjo1kvpJu7yyjgphXPF1K1u2CHFSnhBOyPq6krgUODa3nJWuQHnqARImY3oQDkVrdafl7Xr6tvpdFmkzd/SOZB6THOY+MwcluItwOS3+sE06bTW4zZEG1H1CWyRTnQ6+HyjuHcK5cKha4wdiL5RFg/nT1JE7RIuvGEMb74HDhLo758I8oJ4+V362P2SK+1FTGAH6s/iplRutGrk60DtJ2jvKKvlgeKX60wEdTEDwjnPXSO/4MfU=)
2026-02-04 01:38:14.050151 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLRd9poCVvSZ/mNodmVbg31P8jRJPyF2HBONc0FxYSZC/YfHNr4zriPRp6uXLCgGwOayHG3+NlcvMiwJW4YyFU8=)
2026-02-04 01:38:14.050157 | orchestrator |
2026-02-04 01:38:14.050162 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-04 01:38:14.050168 | orchestrator | Wednesday 04 February 2026 01:38:09 +0000 (0:00:01.157) 0:00:09.208 ****
2026-02-04 01:38:14.050174 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCmI6RGzyRtIx/geW6UyvDQZJbr8rRhbKGH0tBkUTxdGZ/JkglB3RA4e7ji9exP5ZwZ0DRxmzJ62KJeOtw1IBvuj14m4aReeb+sInjOB7Dh1xUBaUmD2hLAXoC7Ql+6O9lKsjB9piRSYT3LPWcAWcEj2iPBl0AXY7EubDwTl+ES61bIkH/uSLsFcw33mXYIBf2nce7eQMBx8AuT/lSiuFv7M5UmuPN+U7Ej6MtI+lw4tgjYi0Uw0L4FT+TzlLY18DV2rauvvHiA1yO/WMT82AukFQ5ygp2T67qYY/RVt513L8SWFrPtUPWmMlaZ0oA09KA6+WuFf8zXPQxRjjaxC1V+XASUx/nXHwPVsyHNMlnJc1zTyYo3/+OzKf1THe0JZchcRUU/6y3NC4v39BbSO20IWkp5ZBoOnD8YmQsbVBLwWMDK50xpiNPJdM3sDIdpMt+rEBJA7hmhIwk3wz3w4X/Pjf9XSpo+1v9GoorRxKiu3CuFFNWw+oW/Y3eH+J+dCaM=)
2026-02-04 01:38:14.050179 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMPXDQiq76NYRY1eqLJMfop9mTb+NaAF8WPJKKU7jROq)
2026-02-04 01:38:14.050185 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEQAZ01dL3Uh//pE2L2V21TUZ/huqMk8gZ/FkUeKxupxZlqCNENdKteWHQhmlxOjZZnyyukH1cknhMFoYSbHTBg=)
2026-02-04 01:38:14.050192 | orchestrator |
2026-02-04 01:38:14.050198 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-04 01:38:14.050205 | orchestrator | Wednesday 04 February 2026 01:38:10 +0000 (0:00:01.149) 0:00:10.357 ****
2026-02-04 01:38:14.050211 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPAyAZry/uh9JsmCwHF9L/rnA8Qn0pzW4ECZZiYD3fRAFE3P8HZAqxmhAvngLuN0xsxPiu+9iTTfij+ZVdm+9lM=)
2026-02-04 01:38:14.050218 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWTmuNCt2UaBVLalkwZAixy7yDiI507xmw3O2CO4Fw+a7hdUVCBtFxLoPkrR8qtUP62q7Ipc1s8FTi8RWpOkL6eYx9d8gwfxJo5lcGKKdnl84qnURA2Tbd8vZquIuTl2Es6X0ZMfKzAhhG/LPnPGozg/h2vir9+L+VzyLhkTFyL7aigpfdeCDufdq6D9Xix4hL2/M0fIFc+sqgQlyMu32CpK2H4N28i7WU7Z5amEr/SdF0luZfz/9WWFrMl6IPcHS/sur+MZdxjwvMoA1q4v587mZaw+wVNk6Bt6nl0plSGt2qajJeivN9wuWeeDUvHrR85BDvreAaJiRA4LZveVVbS2/W6rLSlW53IfJqLxuTTAia5kdtG2P6GSm8P/vJj3FVOk9tW5LGIDHvK4K/aU6uXa+4ZbZvEeM7VHhE2wYcrpJE903NemUrQ/PMcrrzi0xh1+74zvPytIqo80iszE17Pmw6JDpiohpo4EBSkvmkfVhw01kz192avseGtSU3XO0=)
2026-02-04 01:38:14.050232 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINO+t71Hl7vYvisZa7B311OHkVoJguzB9VMJorwBLaHP)
2026-02-04 01:38:14.050238 | orchestrator |
2026-02-04 01:38:14.050245 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-04 01:38:14.050252 | orchestrator | Wednesday 04 February 2026 01:38:11 +0000 (0:00:01.182) 0:00:11.540 ****
2026-02-04 01:38:14.050307 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK8vFfj/Ma3epNeoH7fLNzgEdwDaIOK+FLS8bdfjbEI5au4pjuv9rwooU12kCo3Mt9wgsuJohwBpmV0Ht0fhrrE=) 2026-02-04 01:38:14.050313 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGUj/V0zhGvAdJRAD81/4XRWfST+jBBCSpqWHFWvkXln) 2026-02-04 01:38:14.050319 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQvnYk3eHvhjU7DrEjF70xPgYuLoEdbjgExOZWpezTbASBti3e7pk0ykR06AG0MuIvR66PNWx7XSCTp5KSJvGfJ8HB2DZJqRgd2TVYADOFmBY6nGfMxFhtdlilKS41nkzl4w8Zr+pjEyEDA2jYfSTPUB3eUPsCDUz9NsqkKgetJViccmxEbcgUHpXYlRzVVg9ONIhW0lZTeqzjrLoJvX9wPS9VfYTKSZghxC8dEPOaW0Yk0DbtL3pBhpH5r+sRiqUg8AnL5sDlzw6MXw+nhYE3i9z7M/9anTcXys2DM1GtoQiYKoPuz+LdDYYYzeCLhOEi9HT4lXQMI9+OT1oYuR2mgV2xPYhilHOtqkoiGkQW6mSQnQiS9SzMZsL2XvHQbOXwZolys/y/F42PasJ6F64Fl5nmQVy0GdnVv3oYwB4qSVw19i6JlqRfvCIXfIFjsxDYuOb41Iiu7tDuA3aInAJcWwtTWMwl5yxWrh+by+bG/vZulfkmEOIOPwEJaYA+GEk=) 2026-02-04 01:38:14.050325 | orchestrator | 2026-02-04 01:38:14.050330 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 01:38:14.050336 | orchestrator | Wednesday 04 February 2026 01:38:12 +0000 (0:00:01.202) 0:00:12.742 **** 2026-02-04 01:38:14.050346 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLqDSwZYKrp64v/nrVKypRLfXXWglk65/ccDQLQiaXiUhNLz+X8pfNN4sZO4OpDxMrIAknT0We4jK7xB9XtuZ7ZcTzyjsKsI3dXrV2+F6jtHOHQZWfTUMATtw2NWE6pdeII1GjxCkdxQJaOGwSzooBjOgl9llMO/iv9Zr+jkQFexVFfq2uTPsGF11uCeYTrc1NUIv3pPrsiKadMIOcJOSqFlRSJS1Djwba4LpENpXK6+IT+7VXjDn6zpxU6r2WXZt72Nu0B+kSCg8UgBs3/Min4eGG31//Yw3eZBk2TbLgRyjiZG8z71P2P5y4giFdvMV0nOl7SO/9EMtlw3ki4/Ip4J1/+N2XvveFwVeYwaIVPxbW/NYWYULAZT7RN3mtujgQqaps5TNJ0VUpVE4c24X0wfUScqG8J2pRAf5emRtOnFUh/D4fVbc2vKZSvow2nhWwvkIV7HtnAaeEWt0LYfsjTnrUrOGBOc4qJaR+k0zWfHNwW1PIDuHnEay9vi7dQls=) 2026-02-04 01:38:25.671885 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNJv03SE6MEk/lOWSUaylrQo4ZwAQXi5SRQVwFamHGsl0JEIOhlnM3DVz6LMBkfxbsUndD+6YG/XY525HkwtJk8=) 2026-02-04 01:38:25.671977 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILpbrOoCDpnslRx9e9Vbpj4blwukDasJRe68C2ysFW+Q) 2026-02-04 01:38:25.671987 | orchestrator | 2026-02-04 01:38:25.671994 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 01:38:25.672043 | orchestrator | Wednesday 04 February 2026 01:38:14 +0000 (0:00:01.193) 0:00:13.936 **** 2026-02-04 01:38:25.672054 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyg3MzOzdlfR4LYv/LAXT7mupuJ3bcaYnmVXysE14iQmvrw8VW4V40VkZ3SV7U+tYDwRWT0WLhEUFqf2SjhiPDWz9nEsorBNtVYkcamkpawjfdnMxfqPXksFl0941MzSzvGhBmT8F9IEMKpC3lciZ6+voE0wWqvwFbZ5WbPq8zwyjEru9XOZSZmZC1gHm+H3G8CDAhM7aM7X9Ws6/kC9KlN+OYxIxMjo8VM6CqqGPbM9wnptXDh0Tt1rkl3OjnAnhK2WzRGNrbISs7IbNZ6LtczLmVUTqQuKH4E+ciE6wuOrurfxk3YvIPiGS3pJd9BpztMPwIWIyZZLTUvo0DbcTEq1D3R51N3mTlxq9GUXn2gozskMa3Kyj1Jad+AghrMedByEtGuAbj1dQtRIwlUyRkhe+LPA0E6Dxdqh7ZwW5M3DkRj4FkFCnTxuceq1dCIGykIYAyde4sT0U13cwsYWUoaJzYpgbNR69LVmxPFi61NKnP4OpmOxhgjqCJ2m7w6is=) 2026-02-04 01:38:25.672065 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO3zOXb90131+SXHsLAM5hPFYhBSAd1y3tDtyM8x9H7E) 2026-02-04 01:38:25.672099 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEt0KLyjNVMxKnX1ndvPrSZz+ohDwHb4EfQPUF4X/TOzovZDqtoxIeVJJlGiql5aol2OgWXcT/u1oSkPpKARHM8=) 2026-02-04 01:38:25.672107 | orchestrator | 2026-02-04 01:38:25.672115 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-04 01:38:25.672125 | orchestrator | Wednesday 04 February 2026 01:38:15 +0000 (0:00:01.145) 0:00:15.081 **** 
2026-02-04 01:38:25.672134 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-04 01:38:25.672143 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-04 01:38:25.672151 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-04 01:38:25.672160 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-04 01:38:25.672169 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-04 01:38:25.672178 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-04 01:38:25.672187 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-04 01:38:25.672196 | orchestrator | 2026-02-04 01:38:25.672205 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-04 01:38:25.672215 | orchestrator | Wednesday 04 February 2026 01:38:20 +0000 (0:00:05.634) 0:00:20.716 **** 2026-02-04 01:38:25.672225 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-04 01:38:25.672236 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-04 01:38:25.672245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-04 01:38:25.672255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-04 01:38:25.672264 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-04 01:38:25.672274 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-04 01:38:25.672282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-04 01:38:25.672287 | orchestrator | 2026-02-04 01:38:25.672292 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 01:38:25.672298 | orchestrator | Wednesday 04 February 2026 01:38:21 +0000 (0:00:00.196) 0:00:20.912 **** 2026-02-04 01:38:25.672303 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFbR38ITHEPwNypBX6by7oQQJutanTSsX0c060d9zjgoCk4t1xcy5qjJ9mqIBDbwNkjuj7wMMiHJC6VvbGFR2DM=) 2026-02-04 01:38:25.672338 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZOhBsflscoHoD4waMR/lFbDpLqvSxT9oTK92p4SgNYQV05nMv/9eQfsoW1J9xpWiS1MWoTWdj+gPXkbEFGjYRhZJpMb/bl6W/nHoF77tVF9MOXIgRdKiIBPrl/5d+TOPGZvBrnoe/Gb4fPETCn+S+vWVz9e5huw9frVqObxKl/CCuUU7YAJZziiGkr66N8nqrjcBEjS+SiJNBPkeVI5xmQ9VEPXezt3WEVFhdDo16FF1NKsfWkiLp2ptzUSYcxGT3ejXClEgGbzERH1/P+lBIrnQsgODpgRBfM1aVaKrThG8I/iC/wB2zgxNKcPbaO92uMStytmaU1qBN8LTxqM354I6N8iXdvhCYJfE/NFl9LbqUMJNJklGXMUGV/0VlEOEfZg0nLvLOoPF4OXaT0iKsdaLOhD5uchw/8NPcyFWArNY9M0GaHam7TWYxsFkNyYyr4WlveOnTjtyWigbvVszByA/MXkTVzQ2XStvsruA3vliNz/VJPcGUOWP50/1ILPk=) 2026-02-04 01:38:25.672351 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBknmAUjSPShP9c+DuPPHRnLU8wnQi+EOqKxxXY20MNX) 2026-02-04 
01:38:25.672357 | orchestrator | 2026-02-04 01:38:25.672362 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 01:38:25.672368 | orchestrator | Wednesday 04 February 2026 01:38:22 +0000 (0:00:01.194) 0:00:22.107 **** 2026-02-04 01:38:25.672381 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLRd9poCVvSZ/mNodmVbg31P8jRJPyF2HBONc0FxYSZC/YfHNr4zriPRp6uXLCgGwOayHG3+NlcvMiwJW4YyFU8=) 2026-02-04 01:38:25.672390 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPlYePBh/hl3lItpxjlBVtiKXKZBumxrWLIHEEgIIHVE) 2026-02-04 01:38:25.672399 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCv4S7zjb+Z1xVfElcvi0rCflyInlHJbnyEdrCybf9B4MZakrvkHYO0CIyCtkxF3qjWFJrBbOwyrn8c3iS3sqW4g8BOL8v46fpHAeOG2nzio/iy8i12KG27jdM9o6Nhfkdsw0zEkw7ITti4+Oa/nj4pdafmwFIcxVLJkgbPuxTiDCmlP2YvDhhz7PM8zuHM8LO5dUMZEjABqz2WGLMDNHuIduZnFsD/H63gSfwsjDeC6RCsBYWA/l7qzAJtN0ErvKOqkyfJLNWCkqg+Jjo1kvpJu7yyjgphXPF1K1u2CHFSnhBOyPq6krgUODa3nJWuQHnqARImY3oQDkVrdafl7Xr6tvpdFmkzd/SOZB6THOY+MwcluItwOS3+sE06bTW4zZEG1H1CWyRTnQ6+HyjuHcK5cKha4wdiL5RFg/nT1JE7RIuvGEMb74HDhLo758I8oJ4+V362P2SK+1FTGAH6s/iplRutGrk60DtJ2jvKKvlgeKX60wEdTEDwjnPXSO/4MfU=) 2026-02-04 01:38:25.672409 | orchestrator | 2026-02-04 01:38:25.672417 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 01:38:25.672426 | orchestrator | Wednesday 04 February 2026 01:38:23 +0000 (0:00:01.135) 0:00:23.242 **** 2026-02-04 01:38:25.672435 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMPXDQiq76NYRY1eqLJMfop9mTb+NaAF8WPJKKU7jROq) 2026-02-04 01:38:25.672445 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCmI6RGzyRtIx/geW6UyvDQZJbr8rRhbKGH0tBkUTxdGZ/JkglB3RA4e7ji9exP5ZwZ0DRxmzJ62KJeOtw1IBvuj14m4aReeb+sInjOB7Dh1xUBaUmD2hLAXoC7Ql+6O9lKsjB9piRSYT3LPWcAWcEj2iPBl0AXY7EubDwTl+ES61bIkH/uSLsFcw33mXYIBf2nce7eQMBx8AuT/lSiuFv7M5UmuPN+U7Ej6MtI+lw4tgjYi0Uw0L4FT+TzlLY18DV2rauvvHiA1yO/WMT82AukFQ5ygp2T67qYY/RVt513L8SWFrPtUPWmMlaZ0oA09KA6+WuFf8zXPQxRjjaxC1V+XASUx/nXHwPVsyHNMlnJc1zTyYo3/+OzKf1THe0JZchcRUU/6y3NC4v39BbSO20IWkp5ZBoOnD8YmQsbVBLwWMDK50xpiNPJdM3sDIdpMt+rEBJA7hmhIwk3wz3w4X/Pjf9XSpo+1v9GoorRxKiu3CuFFNWw+oW/Y3eH+J+dCaM=) 2026-02-04 01:38:25.672454 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEQAZ01dL3Uh//pE2L2V21TUZ/huqMk8gZ/FkUeKxupxZlqCNENdKteWHQhmlxOjZZnyyukH1cknhMFoYSbHTBg=) 2026-02-04 01:38:25.672463 | orchestrator | 2026-02-04 01:38:25.672471 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 01:38:25.672480 | orchestrator | Wednesday 04 February 2026 01:38:24 +0000 (0:00:01.142) 0:00:24.385 **** 2026-02-04 01:38:25.672488 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINO+t71Hl7vYvisZa7B311OHkVoJguzB9VMJorwBLaHP) 2026-02-04 01:38:25.672497 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCWTmuNCt2UaBVLalkwZAixy7yDiI507xmw3O2CO4Fw+a7hdUVCBtFxLoPkrR8qtUP62q7Ipc1s8FTi8RWpOkL6eYx9d8gwfxJo5lcGKKdnl84qnURA2Tbd8vZquIuTl2Es6X0ZMfKzAhhG/LPnPGozg/h2vir9+L+VzyLhkTFyL7aigpfdeCDufdq6D9Xix4hL2/M0fIFc+sqgQlyMu32CpK2H4N28i7WU7Z5amEr/SdF0luZfz/9WWFrMl6IPcHS/sur+MZdxjwvMoA1q4v587mZaw+wVNk6Bt6nl0plSGt2qajJeivN9wuWeeDUvHrR85BDvreAaJiRA4LZveVVbS2/W6rLSlW53IfJqLxuTTAia5kdtG2P6GSm8P/vJj3FVOk9tW5LGIDHvK4K/aU6uXa+4ZbZvEeM7VHhE2wYcrpJE903NemUrQ/PMcrrzi0xh1+74zvPytIqo80iszE17Pmw6JDpiohpo4EBSkvmkfVhw01kz192avseGtSU3XO0=) 2026-02-04 01:38:25.672518 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPAyAZry/uh9JsmCwHF9L/rnA8Qn0pzW4ECZZiYD3fRAFE3P8HZAqxmhAvngLuN0xsxPiu+9iTTfij+ZVdm+9lM=) 2026-02-04 01:38:30.570183 | orchestrator | 2026-02-04 01:38:30.570272 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 01:38:30.570285 | orchestrator | Wednesday 04 February 2026 01:38:25 +0000 (0:00:01.173) 0:00:25.558 **** 2026-02-04 01:38:30.570294 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK8vFfj/Ma3epNeoH7fLNzgEdwDaIOK+FLS8bdfjbEI5au4pjuv9rwooU12kCo3Mt9wgsuJohwBpmV0Ht0fhrrE=) 2026-02-04 01:38:30.570305 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQvnYk3eHvhjU7DrEjF70xPgYuLoEdbjgExOZWpezTbASBti3e7pk0ykR06AG0MuIvR66PNWx7XSCTp5KSJvGfJ8HB2DZJqRgd2TVYADOFmBY6nGfMxFhtdlilKS41nkzl4w8Zr+pjEyEDA2jYfSTPUB3eUPsCDUz9NsqkKgetJViccmxEbcgUHpXYlRzVVg9ONIhW0lZTeqzjrLoJvX9wPS9VfYTKSZghxC8dEPOaW0Yk0DbtL3pBhpH5r+sRiqUg8AnL5sDlzw6MXw+nhYE3i9z7M/9anTcXys2DM1GtoQiYKoPuz+LdDYYYzeCLhOEi9HT4lXQMI9+OT1oYuR2mgV2xPYhilHOtqkoiGkQW6mSQnQiS9SzMZsL2XvHQbOXwZolys/y/F42PasJ6F64Fl5nmQVy0GdnVv3oYwB4qSVw19i6JlqRfvCIXfIFjsxDYuOb41Iiu7tDuA3aInAJcWwtTWMwl5yxWrh+by+bG/vZulfkmEOIOPwEJaYA+GEk=) 2026-02-04 01:38:30.570315 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGUj/V0zhGvAdJRAD81/4XRWfST+jBBCSpqWHFWvkXln) 2026-02-04 01:38:30.570323 | orchestrator | 2026-02-04 01:38:30.570330 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 01:38:30.570347 | orchestrator | Wednesday 04 February 2026 01:38:26 +0000 (0:00:01.177) 0:00:26.736 **** 2026-02-04 01:38:30.570368 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDLqDSwZYKrp64v/nrVKypRLfXXWglk65/ccDQLQiaXiUhNLz+X8pfNN4sZO4OpDxMrIAknT0We4jK7xB9XtuZ7ZcTzyjsKsI3dXrV2+F6jtHOHQZWfTUMATtw2NWE6pdeII1GjxCkdxQJaOGwSzooBjOgl9llMO/iv9Zr+jkQFexVFfq2uTPsGF11uCeYTrc1NUIv3pPrsiKadMIOcJOSqFlRSJS1Djwba4LpENpXK6+IT+7VXjDn6zpxU6r2WXZt72Nu0B+kSCg8UgBs3/Min4eGG31//Yw3eZBk2TbLgRyjiZG8z71P2P5y4giFdvMV0nOl7SO/9EMtlw3ki4/Ip4J1/+N2XvveFwVeYwaIVPxbW/NYWYULAZT7RN3mtujgQqaps5TNJ0VUpVE4c24X0wfUScqG8J2pRAf5emRtOnFUh/D4fVbc2vKZSvow2nhWwvkIV7HtnAaeEWt0LYfsjTnrUrOGBOc4qJaR+k0zWfHNwW1PIDuHnEay9vi7dQls=) 2026-02-04 01:38:30.570377 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNJv03SE6MEk/lOWSUaylrQo4ZwAQXi5SRQVwFamHGsl0JEIOhlnM3DVz6LMBkfxbsUndD+6YG/XY525HkwtJk8=) 2026-02-04 01:38:30.570383 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILpbrOoCDpnslRx9e9Vbpj4blwukDasJRe68C2ysFW+Q) 2026-02-04 01:38:30.570390 | orchestrator | 2026-02-04 01:38:30.570396 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-04 01:38:30.570409 | orchestrator | Wednesday 04 February 2026 01:38:27 +0000 (0:00:01.152) 0:00:27.888 **** 2026-02-04 01:38:30.570417 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEt0KLyjNVMxKnX1ndvPrSZz+ohDwHb4EfQPUF4X/TOzovZDqtoxIeVJJlGiql5aol2OgWXcT/u1oSkPpKARHM8=) 2026-02-04 01:38:30.570439 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCyg3MzOzdlfR4LYv/LAXT7mupuJ3bcaYnmVXysE14iQmvrw8VW4V40VkZ3SV7U+tYDwRWT0WLhEUFqf2SjhiPDWz9nEsorBNtVYkcamkpawjfdnMxfqPXksFl0941MzSzvGhBmT8F9IEMKpC3lciZ6+voE0wWqvwFbZ5WbPq8zwyjEru9XOZSZmZC1gHm+H3G8CDAhM7aM7X9Ws6/kC9KlN+OYxIxMjo8VM6CqqGPbM9wnptXDh0Tt1rkl3OjnAnhK2WzRGNrbISs7IbNZ6LtczLmVUTqQuKH4E+ciE6wuOrurfxk3YvIPiGS3pJd9BpztMPwIWIyZZLTUvo0DbcTEq1D3R51N3mTlxq9GUXn2gozskMa3Kyj1Jad+AghrMedByEtGuAbj1dQtRIwlUyRkhe+LPA0E6Dxdqh7ZwW5M3DkRj4FkFCnTxuceq1dCIGykIYAyde4sT0U13cwsYWUoaJzYpgbNR69LVmxPFi61NKnP4OpmOxhgjqCJ2m7w6is=) 2026-02-04 01:38:30.570446 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO3zOXb90131+SXHsLAM5hPFYhBSAd1y3tDtyM8x9H7E) 2026-02-04 01:38:30.570452 | orchestrator | 2026-02-04 01:38:30.570459 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-04 01:38:30.570495 | orchestrator | Wednesday 04 February 2026 01:38:29 +0000 (0:00:01.148) 0:00:29.036 **** 2026-02-04 01:38:30.570513 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-04 01:38:30.570523 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-04 01:38:30.570533 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-04 01:38:30.570544 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-04 01:38:30.570554 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-04 01:38:30.570564 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-04 01:38:30.570574 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-04 01:38:30.570585 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:38:30.570596 | orchestrator | 2026-02-04 01:38:30.570624 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-04 01:38:30.570636 | orchestrator | Wednesday 04 
February 2026 01:38:29 +0000 (0:00:00.185) 0:00:29.222 **** 2026-02-04 01:38:30.570647 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:38:30.570658 | orchestrator | 2026-02-04 01:38:30.570666 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-04 01:38:30.570672 | orchestrator | Wednesday 04 February 2026 01:38:29 +0000 (0:00:00.054) 0:00:29.277 **** 2026-02-04 01:38:30.570679 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:38:30.570685 | orchestrator | 2026-02-04 01:38:30.570691 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-04 01:38:30.570697 | orchestrator | Wednesday 04 February 2026 01:38:29 +0000 (0:00:00.058) 0:00:29.335 **** 2026-02-04 01:38:30.570703 | orchestrator | changed: [testbed-manager] 2026-02-04 01:38:30.570709 | orchestrator | 2026-02-04 01:38:30.570716 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:38:30.570722 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 01:38:30.570730 | orchestrator | 2026-02-04 01:38:30.570736 | orchestrator | 2026-02-04 01:38:30.570742 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:38:30.570748 | orchestrator | Wednesday 04 February 2026 01:38:30 +0000 (0:00:00.846) 0:00:30.181 **** 2026-02-04 01:38:30.570760 | orchestrator | =============================================================================== 2026-02-04 01:38:30.570766 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.33s 2026-02-04 01:38:30.570773 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.63s 2026-02-04 01:38:30.570780 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.32s 2026-02-04 
01:38:30.570786 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-02-04 01:38:30.570792 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-02-04 01:38:30.570799 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-02-04 01:38:30.570805 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-02-04 01:38:30.570811 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.18s 2026-02-04 01:38:30.570817 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-02-04 01:38:30.570823 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2026-02-04 01:38:30.570829 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-04 01:38:30.570835 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-04 01:38:30.570842 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-04 01:38:30.570848 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-04 01:38:30.570861 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-04 01:38:30.570867 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2026-02-04 01:38:30.570873 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.85s 2026-02-04 01:38:30.570880 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2026-02-04 01:38:30.570887 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.19s 
2026-02-04 01:38:30.570893 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2026-02-04 01:38:30.972730 | orchestrator | + osism apply squid 2026-02-04 01:38:43.287575 | orchestrator | 2026-02-04 01:38:43 | INFO  | Task 0757e728-dcf3-4a09-8b23-754d9b01b1e7 (squid) was prepared for execution. 2026-02-04 01:38:43.287702 | orchestrator | 2026-02-04 01:38:43 | INFO  | It takes a moment until task 0757e728-dcf3-4a09-8b23-754d9b01b1e7 (squid) has been started and output is visible here. 2026-02-04 01:40:43.281121 | orchestrator | 2026-02-04 01:40:43.281225 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-04 01:40:43.281238 | orchestrator | 2026-02-04 01:40:43.281246 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-04 01:40:43.281253 | orchestrator | Wednesday 04 February 2026 01:38:47 +0000 (0:00:00.186) 0:00:00.186 **** 2026-02-04 01:40:43.281260 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 01:40:43.281268 | orchestrator | 2026-02-04 01:40:43.281275 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-04 01:40:43.281281 | orchestrator | Wednesday 04 February 2026 01:38:47 +0000 (0:00:00.084) 0:00:00.270 **** 2026-02-04 01:40:43.281288 | orchestrator | ok: [testbed-manager] 2026-02-04 01:40:43.281295 | orchestrator | 2026-02-04 01:40:43.281302 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-04 01:40:43.281308 | orchestrator | Wednesday 04 February 2026 01:38:49 +0000 (0:00:01.670) 0:00:01.940 **** 2026-02-04 01:40:43.281316 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-04 01:40:43.281323 | orchestrator | changed: 
[testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-04 01:40:43.281331 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-04 01:40:43.281338 | orchestrator | 2026-02-04 01:40:43.281345 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-04 01:40:43.281351 | orchestrator | Wednesday 04 February 2026 01:38:50 +0000 (0:00:01.271) 0:00:03.212 **** 2026-02-04 01:40:43.281359 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-04 01:40:43.281366 | orchestrator | 2026-02-04 01:40:43.281372 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-04 01:40:43.281378 | orchestrator | Wednesday 04 February 2026 01:38:52 +0000 (0:00:01.161) 0:00:04.374 **** 2026-02-04 01:40:43.281384 | orchestrator | ok: [testbed-manager] 2026-02-04 01:40:43.281390 | orchestrator | 2026-02-04 01:40:43.281396 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-04 01:40:43.281402 | orchestrator | Wednesday 04 February 2026 01:38:52 +0000 (0:00:00.391) 0:00:04.765 **** 2026-02-04 01:40:43.281409 | orchestrator | changed: [testbed-manager] 2026-02-04 01:40:43.281415 | orchestrator | 2026-02-04 01:40:43.281422 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-04 01:40:43.281428 | orchestrator | Wednesday 04 February 2026 01:38:53 +0000 (0:00:00.995) 0:00:05.761 **** 2026-02-04 01:40:43.281434 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-04 01:40:43.281445 | orchestrator | ok: [testbed-manager] 2026-02-04 01:40:43.281452 | orchestrator | 2026-02-04 01:40:43.281459 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-04 01:40:43.281545 | orchestrator | Wednesday 04 February 2026 01:39:30 +0000 (0:00:36.661) 0:00:42.422 **** 2026-02-04 01:40:43.281553 | orchestrator | changed: [testbed-manager] 2026-02-04 01:40:43.281560 | orchestrator | 2026-02-04 01:40:43.281566 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-04 01:40:43.281573 | orchestrator | Wednesday 04 February 2026 01:39:42 +0000 (0:00:12.075) 0:00:54.498 **** 2026-02-04 01:40:43.281579 | orchestrator | Pausing for 60 seconds 2026-02-04 01:40:43.281586 | orchestrator | changed: [testbed-manager] 2026-02-04 01:40:43.281592 | orchestrator | 2026-02-04 01:40:43.281598 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-04 01:40:43.281604 | orchestrator | Wednesday 04 February 2026 01:40:42 +0000 (0:01:00.087) 0:01:54.586 **** 2026-02-04 01:40:43.281611 | orchestrator | ok: [testbed-manager] 2026-02-04 01:40:43.281617 | orchestrator | 2026-02-04 01:40:43.281625 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-04 01:40:43.281631 | orchestrator | Wednesday 04 February 2026 01:40:42 +0000 (0:00:00.075) 0:01:54.662 **** 2026-02-04 01:40:43.281638 | orchestrator | changed: [testbed-manager] 2026-02-04 01:40:43.281645 | orchestrator | 2026-02-04 01:40:43.281653 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:40:43.281660 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:40:43.281666 | orchestrator | 2026-02-04 01:40:43.281673 | orchestrator | 2026-02-04 01:40:43.281680 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-02-04 01:40:43.281686 | orchestrator | Wednesday 04 February 2026 01:40:42 +0000 (0:00:00.648) 0:01:55.311 **** 2026-02-04 01:40:43.281693 | orchestrator | =============================================================================== 2026-02-04 01:40:43.281701 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.09s 2026-02-04 01:40:43.281707 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 36.66s 2026-02-04 01:40:43.281713 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.08s 2026-02-04 01:40:43.281735 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.67s 2026-02-04 01:40:43.281743 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.27s 2026-02-04 01:40:43.281749 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.16s 2026-02-04 01:40:43.281755 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.00s 2026-02-04 01:40:43.281761 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s 2026-02-04 01:40:43.281767 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.39s 2026-02-04 01:40:43.281774 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s 2026-02-04 01:40:43.281780 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2026-02-04 01:40:43.652231 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-04 01:40:43.652324 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-04 01:40:43.710840 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-04 01:40:43.710951 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh 
kolla/release 2026-02-04 01:40:43.717599 | orchestrator | + set -e 2026-02-04 01:40:43.717676 | orchestrator | + NAMESPACE=kolla/release 2026-02-04 01:40:43.717690 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-04 01:40:43.723010 | orchestrator | ++ semver 9.5.0 9.0.0 2026-02-04 01:40:43.798224 | orchestrator | + [[ 1 -lt 0 ]] 2026-02-04 01:40:43.798789 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-02-04 01:40:56.098566 | orchestrator | 2026-02-04 01:40:56 | INFO  | Task 07f962a7-e735-474b-90d2-838a475bc092 (operator) was prepared for execution. 2026-02-04 01:40:56.098659 | orchestrator | 2026-02-04 01:40:56 | INFO  | It takes a moment until task 07f962a7-e735-474b-90d2-838a475bc092 (operator) has been started and output is visible here. 2026-02-04 01:41:12.701287 | orchestrator | 2026-02-04 01:41:12.701378 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-02-04 01:41:12.701392 | orchestrator | 2026-02-04 01:41:12.701398 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 01:41:12.701403 | orchestrator | Wednesday 04 February 2026 01:41:00 +0000 (0:00:00.147) 0:00:00.147 **** 2026-02-04 01:41:12.701408 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:41:12.701414 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:41:12.701418 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:41:12.701423 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:41:12.701428 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:41:12.701433 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:41:12.701438 | orchestrator | 2026-02-04 01:41:12.701442 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-02-04 01:41:12.701447 | orchestrator | Wednesday 04 February 2026 01:41:03 +0000 (0:00:03.252) 0:00:03.400 
**** 2026-02-04 01:41:12.701452 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:41:12.701457 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:41:12.701461 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:41:12.701466 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:41:12.701471 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:41:12.701475 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:41:12.701480 | orchestrator | 2026-02-04 01:41:12.701484 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-04 01:41:12.701489 | orchestrator | 2026-02-04 01:41:12.701494 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-04 01:41:12.701498 | orchestrator | Wednesday 04 February 2026 01:41:04 +0000 (0:00:00.845) 0:00:04.245 **** 2026-02-04 01:41:12.701503 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:41:12.701508 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:41:12.701512 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:41:12.701517 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:41:12.701527 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:41:12.701533 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:41:12.701538 | orchestrator | 2026-02-04 01:41:12.701542 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-04 01:41:12.701650 | orchestrator | Wednesday 04 February 2026 01:41:04 +0000 (0:00:00.185) 0:00:04.430 **** 2026-02-04 01:41:12.701659 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:41:12.701663 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:41:12.701668 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:41:12.701672 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:41:12.701677 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:41:12.701682 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:41:12.701686 | orchestrator | 2026-02-04 01:41:12.701691 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-04 01:41:12.701696 | orchestrator | Wednesday 04 February 2026 01:41:05 +0000 (0:00:00.193) 0:00:04.624 **** 2026-02-04 01:41:12.701701 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:41:12.701707 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:41:12.701712 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:41:12.701717 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:41:12.701722 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:41:12.701726 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:41:12.701731 | orchestrator | 2026-02-04 01:41:12.701735 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-04 01:41:12.701740 | orchestrator | Wednesday 04 February 2026 01:41:05 +0000 (0:00:00.659) 0:00:05.283 **** 2026-02-04 01:41:12.701745 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:41:12.701750 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:41:12.701754 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:41:12.701759 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:41:12.701763 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:41:12.701768 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:41:12.701790 | orchestrator | 2026-02-04 01:41:12.701795 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-04 01:41:12.701799 | orchestrator | Wednesday 04 February 2026 01:41:06 +0000 (0:00:00.871) 0:00:06.155 **** 2026-02-04 01:41:12.701804 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-04 01:41:12.701809 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-04 01:41:12.701814 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-04 01:41:12.701818 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-02-04 01:41:12.701823 | 
orchestrator | changed: [testbed-node-3] => (item=adm) 2026-02-04 01:41:12.701827 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-02-04 01:41:12.701832 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-02-04 01:41:12.701837 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-02-04 01:41:12.701841 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-02-04 01:41:12.701846 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-02-04 01:41:12.701851 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-02-04 01:41:12.701856 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-02-04 01:41:12.701862 | orchestrator | 2026-02-04 01:41:12.701867 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-04 01:41:12.701872 | orchestrator | Wednesday 04 February 2026 01:41:07 +0000 (0:00:01.161) 0:00:07.316 **** 2026-02-04 01:41:12.701878 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:41:12.701883 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:41:12.701889 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:41:12.701894 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:41:12.701899 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:41:12.701905 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:41:12.701911 | orchestrator | 2026-02-04 01:41:12.701916 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-04 01:41:12.701922 | orchestrator | Wednesday 04 February 2026 01:41:09 +0000 (0:00:01.284) 0:00:08.600 **** 2026-02-04 01:41:12.701927 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-02-04 01:41:12.701933 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-02-04 01:41:12.701938 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-02-04 01:41:12.701944 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 01:41:12.701961 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 01:41:12.701967 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 01:41:12.701972 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 01:41:12.701978 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 01:41:12.701983 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-02-04 01:41:12.701988 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-02-04 01:41:12.701994 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-02-04 01:41:12.701999 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-02-04 01:41:12.702005 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-02-04 01:41:12.702010 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-02-04 01:41:12.702049 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-02-04 01:41:12.702054 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-02-04 01:41:12.702059 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-02-04 01:41:12.702064 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-02-04 01:41:12.702068 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-02-04 01:41:12.702073 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-02-04 01:41:12.702082 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-02-04 01:41:12.702087 | 
orchestrator | 2026-02-04 01:41:12.702092 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-04 01:41:12.702097 | orchestrator | Wednesday 04 February 2026 01:41:10 +0000 (0:00:01.223) 0:00:09.824 **** 2026-02-04 01:41:12.702102 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:41:12.702106 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:41:12.702111 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:41:12.702116 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:41:12.702121 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:41:12.702125 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:41:12.702130 | orchestrator | 2026-02-04 01:41:12.702135 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-04 01:41:12.702140 | orchestrator | Wednesday 04 February 2026 01:41:10 +0000 (0:00:00.176) 0:00:10.000 **** 2026-02-04 01:41:12.702144 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:41:12.702149 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:41:12.702154 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:41:12.702158 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:41:12.702163 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:41:12.702167 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:41:12.702172 | orchestrator | 2026-02-04 01:41:12.702182 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-04 01:41:12.702187 | orchestrator | Wednesday 04 February 2026 01:41:10 +0000 (0:00:00.213) 0:00:10.213 **** 2026-02-04 01:41:12.702191 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:41:12.702196 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:41:12.702201 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:41:12.702205 | orchestrator | changed: [testbed-node-2] 2026-02-04 
01:41:12.702210 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:41:12.702214 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:41:12.702219 | orchestrator | 2026-02-04 01:41:12.702224 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-04 01:41:12.702228 | orchestrator | Wednesday 04 February 2026 01:41:11 +0000 (0:00:00.625) 0:00:10.838 **** 2026-02-04 01:41:12.702233 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:41:12.702238 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:41:12.702242 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:41:12.702247 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:41:12.702251 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:41:12.702256 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:41:12.702260 | orchestrator | 2026-02-04 01:41:12.702265 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-04 01:41:12.702270 | orchestrator | Wednesday 04 February 2026 01:41:11 +0000 (0:00:00.198) 0:00:11.037 **** 2026-02-04 01:41:12.702274 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-04 01:41:12.702284 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:41:12.702289 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-04 01:41:12.702294 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:41:12.702298 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-04 01:41:12.702303 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:41:12.702308 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-04 01:41:12.702312 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:41:12.702317 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-04 01:41:12.702322 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-04 01:41:12.702326 | orchestrator | changed: [testbed-node-2] 2026-02-04 
01:41:12.702331 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:41:12.702335 | orchestrator | 2026-02-04 01:41:12.702340 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-04 01:41:12.702345 | orchestrator | Wednesday 04 February 2026 01:41:12 +0000 (0:00:00.735) 0:00:11.773 **** 2026-02-04 01:41:12.702353 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:41:12.702358 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:41:12.702362 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:41:12.702367 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:41:12.702371 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:41:12.702376 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:41:12.702381 | orchestrator | 2026-02-04 01:41:12.702385 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-04 01:41:12.702390 | orchestrator | Wednesday 04 February 2026 01:41:12 +0000 (0:00:00.176) 0:00:11.950 **** 2026-02-04 01:41:12.702395 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:41:12.702399 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:41:12.702404 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:41:12.702408 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:41:12.702417 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:41:14.213708 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:41:14.213820 | orchestrator | 2026-02-04 01:41:14.213840 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-04 01:41:14.213857 | orchestrator | Wednesday 04 February 2026 01:41:12 +0000 (0:00:00.191) 0:00:12.141 **** 2026-02-04 01:41:14.213871 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:41:14.213887 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:41:14.213902 | orchestrator | skipping: [testbed-node-2] 2026-02-04 
01:41:14.213917 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:41:14.213932 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:41:14.213947 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:41:14.213962 | orchestrator | 2026-02-04 01:41:14.213978 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-04 01:41:14.213993 | orchestrator | Wednesday 04 February 2026 01:41:12 +0000 (0:00:00.192) 0:00:12.334 **** 2026-02-04 01:41:14.214008 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:41:14.214093 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:41:14.214110 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:41:14.214125 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:41:14.214140 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:41:14.214158 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:41:14.214171 | orchestrator | 2026-02-04 01:41:14.214186 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-04 01:41:14.214202 | orchestrator | Wednesday 04 February 2026 01:41:13 +0000 (0:00:00.725) 0:00:13.060 **** 2026-02-04 01:41:14.214216 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:41:14.214230 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:41:14.214246 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:41:14.214261 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:41:14.214275 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:41:14.214290 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:41:14.214304 | orchestrator | 2026-02-04 01:41:14.214319 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:41:14.214360 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:41:14.214378 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:41:14.214394 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:41:14.214411 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:41:14.214426 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:41:14.214465 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 01:41:14.214475 | orchestrator | 2026-02-04 01:41:14.214484 | orchestrator | 2026-02-04 01:41:14.214492 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:41:14.214501 | orchestrator | Wednesday 04 February 2026 01:41:13 +0000 (0:00:00.298) 0:00:13.358 **** 2026-02-04 01:41:14.214510 | orchestrator | =============================================================================== 2026-02-04 01:41:14.214519 | orchestrator | Gathering Facts --------------------------------------------------------- 3.25s 2026-02-04 01:41:14.214528 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.28s 2026-02-04 01:41:14.214536 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.22s 2026-02-04 01:41:14.214546 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s 2026-02-04 01:41:14.214580 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.87s 2026-02-04 01:41:14.214590 | orchestrator | Do not require tty for all users ---------------------------------------- 0.85s 2026-02-04 01:41:14.214598 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s 2026-02-04 01:41:14.214607 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.73s 2026-02-04 01:41:14.214615 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.66s 2026-02-04 01:41:14.214624 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.63s 2026-02-04 01:41:14.214632 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.30s 2026-02-04 01:41:14.214641 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.21s 2026-02-04 01:41:14.214650 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s 2026-02-04 01:41:14.214661 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.19s 2026-02-04 01:41:14.214675 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.19s 2026-02-04 01:41:14.214693 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s 2026-02-04 01:41:14.214714 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s 2026-02-04 01:41:14.214729 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s 2026-02-04 01:41:14.214744 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2026-02-04 01:41:14.591962 | orchestrator | + osism apply --environment custom facts 2026-02-04 01:41:16.710442 | orchestrator | 2026-02-04 01:41:16 | INFO  | Trying to run play facts in environment custom 2026-02-04 01:41:26.908707 | orchestrator | 2026-02-04 01:41:26 | INFO  | Task fd696fba-98e9-4fd1-a58f-e886cd2798f2 (facts) was prepared for execution. 2026-02-04 01:41:26.908817 | orchestrator | 2026-02-04 01:41:26 | INFO  | It takes a moment until task fd696fba-98e9-4fd1-a58f-e886cd2798f2 (facts) has been started and output is visible here. 
2026-02-04 01:42:09.690445 | orchestrator | 2026-02-04 01:42:09.690537 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-02-04 01:42:09.690548 | orchestrator | 2026-02-04 01:42:09.690556 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-04 01:42:09.690564 | orchestrator | Wednesday 04 February 2026 01:41:31 +0000 (0:00:00.101) 0:00:00.101 **** 2026-02-04 01:42:09.690572 | orchestrator | ok: [testbed-manager] 2026-02-04 01:42:09.690581 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:42:09.690589 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:42:09.690596 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:42:09.690603 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:42:09.690610 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:42:09.690639 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:42:09.690647 | orchestrator | 2026-02-04 01:42:09.690654 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-02-04 01:42:09.690662 | orchestrator | Wednesday 04 February 2026 01:41:33 +0000 (0:00:01.407) 0:00:01.509 **** 2026-02-04 01:42:09.690669 | orchestrator | ok: [testbed-manager] 2026-02-04 01:42:09.690676 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:42:09.690683 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:42:09.690690 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:42:09.690697 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:42:09.690705 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:42:09.690748 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:42:09.690768 | orchestrator | 2026-02-04 01:42:09.690776 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-02-04 01:42:09.690783 | orchestrator | 2026-02-04 01:42:09.690790 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-02-04 01:42:09.690797 | orchestrator | Wednesday 04 February 2026 01:41:34 +0000 (0:00:01.272) 0:00:02.782 **** 2026-02-04 01:42:09.690804 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:42:09.690811 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:42:09.690817 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:42:09.690824 | orchestrator | 2026-02-04 01:42:09.690831 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-04 01:42:09.690839 | orchestrator | Wednesday 04 February 2026 01:41:34 +0000 (0:00:00.116) 0:00:02.898 **** 2026-02-04 01:42:09.690846 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:42:09.690852 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:42:09.690859 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:42:09.690866 | orchestrator | 2026-02-04 01:42:09.690873 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-04 01:42:09.690879 | orchestrator | Wednesday 04 February 2026 01:41:34 +0000 (0:00:00.239) 0:00:03.138 **** 2026-02-04 01:42:09.690886 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:42:09.690893 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:42:09.690900 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:42:09.690906 | orchestrator | 2026-02-04 01:42:09.690914 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-04 01:42:09.690921 | orchestrator | Wednesday 04 February 2026 01:41:34 +0000 (0:00:00.236) 0:00:03.375 **** 2026-02-04 01:42:09.690929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:42:09.690938 | orchestrator | 2026-02-04 01:42:09.690944 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-02-04 01:42:09.690951 | orchestrator | Wednesday 04 February 2026 01:41:35 +0000 (0:00:00.166) 0:00:03.542 **** 2026-02-04 01:42:09.690958 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:42:09.690965 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:42:09.690971 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:42:09.690978 | orchestrator | 2026-02-04 01:42:09.690985 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-04 01:42:09.690992 | orchestrator | Wednesday 04 February 2026 01:41:35 +0000 (0:00:00.431) 0:00:03.973 **** 2026-02-04 01:42:09.691000 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:42:09.691008 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:42:09.691016 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:42:09.691025 | orchestrator | 2026-02-04 01:42:09.691032 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-04 01:42:09.691041 | orchestrator | Wednesday 04 February 2026 01:41:35 +0000 (0:00:00.162) 0:00:04.136 **** 2026-02-04 01:42:09.691049 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:42:09.691057 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:42:09.691065 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:42:09.691073 | orchestrator | 2026-02-04 01:42:09.691081 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-04 01:42:09.691095 | orchestrator | Wednesday 04 February 2026 01:41:36 +0000 (0:00:01.053) 0:00:05.189 **** 2026-02-04 01:42:09.691103 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:42:09.691111 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:42:09.691119 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:42:09.691128 | orchestrator | 2026-02-04 01:42:09.691136 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-04 
01:42:09.691144 | orchestrator | Wednesday 04 February 2026 01:41:37 +0000 (0:00:00.502) 0:00:05.692 **** 2026-02-04 01:42:09.691152 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:42:09.691160 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:42:09.691168 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:42:09.691176 | orchestrator | 2026-02-04 01:42:09.691185 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-04 01:42:09.691231 | orchestrator | Wednesday 04 February 2026 01:41:38 +0000 (0:00:01.049) 0:00:06.741 **** 2026-02-04 01:42:09.691240 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:42:09.691249 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:42:09.691257 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:42:09.691265 | orchestrator | 2026-02-04 01:42:09.691273 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-02-04 01:42:09.691280 | orchestrator | Wednesday 04 February 2026 01:41:53 +0000 (0:00:15.184) 0:00:21.925 **** 2026-02-04 01:42:09.691287 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:42:09.691293 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:42:09.691300 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:42:09.691307 | orchestrator | 2026-02-04 01:42:09.691314 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-02-04 01:42:09.691336 | orchestrator | Wednesday 04 February 2026 01:41:53 +0000 (0:00:00.119) 0:00:22.044 **** 2026-02-04 01:42:09.691344 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:42:09.691350 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:42:09.691357 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:42:09.691364 | orchestrator | 2026-02-04 01:42:09.691370 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-04 
01:42:09.691377 | orchestrator | Wednesday 04 February 2026 01:42:00 +0000 (0:00:07.154) 0:00:29.199 ****
2026-02-04 01:42:09.691384 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:09.691391 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:09.691397 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:09.691404 | orchestrator |
2026-02-04 01:42:09.691411 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-04 01:42:09.691418 | orchestrator | Wednesday 04 February 2026 01:42:01 +0000 (0:00:00.465) 0:00:29.664 ****
2026-02-04 01:42:09.691425 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-04 01:42:09.691432 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-04 01:42:09.691439 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-04 01:42:09.691445 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-04 01:42:09.691456 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-04 01:42:09.691467 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-04 01:42:09.691477 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-04 01:42:09.691487 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-04 01:42:09.691496 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-04 01:42:09.691512 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-04 01:42:09.691524 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-04 01:42:09.691535 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-04 01:42:09.691545 | orchestrator |
2026-02-04 01:42:09.691555 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-04 01:42:09.691574 | orchestrator | Wednesday 04 February 2026 01:42:04 +0000 (0:00:03.490) 0:00:33.155 ****
2026-02-04 01:42:09.691585 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:09.691597 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:09.691608 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:09.691619 | orchestrator |
2026-02-04 01:42:09.691630 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-04 01:42:09.691641 | orchestrator |
2026-02-04 01:42:09.691652 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-04 01:42:09.691663 | orchestrator | Wednesday 04 February 2026 01:42:06 +0000 (0:00:01.284) 0:00:34.439 ****
2026-02-04 01:42:09.691674 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:42:09.691686 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:42:09.691697 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:42:09.691727 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:09.691739 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:09.691750 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:09.691783 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:09.691795 | orchestrator |
2026-02-04 01:42:09.691806 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:42:09.691819 | orchestrator | testbed-manager : ok=3  changed=0  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-02-04 01:42:09.691830 | orchestrator | testbed-node-0  : ok=3  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-02-04 01:42:09.691843 | orchestrator | testbed-node-1  : ok=3  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-02-04 01:42:09.691854 | orchestrator | testbed-node-2  : ok=3  changed=2  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2026-02-04 01:42:09.691864 | orchestrator | testbed-node-3  : ok=16  changed=7  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-02-04 01:42:09.691873 | orchestrator | testbed-node-4  : ok=16  changed=7  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-02-04 01:42:09.691885 | orchestrator | testbed-node-5  : ok=16  changed=7  unreachable=0  failed=0  skipped=2  rescued=0  ignored=0
2026-02-04 01:42:09.691897 | orchestrator |
2026-02-04 01:42:09.691908 | orchestrator |
2026-02-04 01:42:09.691919 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:42:09.691930 | orchestrator | Wednesday 04 February 2026 01:42:09 +0000 (0:00:03.629) 0:00:38.068 ****
2026-02-04 01:42:09.691942 | orchestrator | ===============================================================================
2026-02-04 01:42:09.691953 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.18s
2026-02-04 01:42:09.691964 | orchestrator | Install required packages (Debian) -------------------------------------- 7.15s
2026-02-04 01:42:09.691975 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.63s
2026-02-04 01:42:09.691986 | orchestrator | Copy fact files --------------------------------------------------------- 3.49s
2026-02-04 01:42:09.691997 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s
2026-02-04 01:42:09.692008 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.28s
2026-02-04 01:42:09.692027 | orchestrator | Copy fact file ---------------------------------------------------------- 1.27s
2026-02-04 01:42:09.993373 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.05s
2026-02-04 01:42:09.993471 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.05s
2026-02-04 01:42:09.993491 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.50s
2026-02-04 01:42:09.993529 | orchestrator | Create custom facts directory ------------------------------------------- 0.47s
2026-02-04 01:42:09.993538 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2026-02-04 01:42:09.993545 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.24s
2026-02-04 01:42:09.993553 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.24s
2026-02-04 01:42:09.993560 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s
2026-02-04 01:42:09.993568 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.16s
2026-02-04 01:42:09.993576 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2026-02-04 01:42:09.993597 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2026-02-04 01:42:10.407365 | orchestrator | + osism apply bootstrap
2026-02-04 01:42:22.653287 | orchestrator | 2026-02-04 01:42:22 | INFO  | Task df0cc4b7-808e-4a32-af69-202938912fca (bootstrap) was prepared for execution.
2026-02-04 01:42:22.653387 | orchestrator | 2026-02-04 01:42:22 | INFO  | It takes a moment until task df0cc4b7-808e-4a32-af69-202938912fca (bootstrap) has been started and output is visible here.
2026-02-04 01:42:39.788482 | orchestrator |
2026-02-04 01:42:39.788599 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-02-04 01:42:39.788611 | orchestrator |
2026-02-04 01:42:39.788618 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-02-04 01:42:39.788626 | orchestrator | Wednesday 04 February 2026 01:42:27 +0000 (0:00:00.160) 0:00:00.160 ****
2026-02-04 01:42:39.788633 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:39.788640 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:39.788647 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:39.788653 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:39.788659 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:42:39.788666 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:42:39.788672 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:42:39.788679 | orchestrator |
2026-02-04 01:42:39.788686 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-04 01:42:39.788692 | orchestrator |
2026-02-04 01:42:39.788698 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-04 01:42:39.788705 | orchestrator | Wednesday 04 February 2026 01:42:27 +0000 (0:00:00.291) 0:00:00.451 ****
2026-02-04 01:42:39.788711 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:42:39.788717 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:42:39.788724 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:42:39.788730 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:39.788736 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:39.788742 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:39.788748 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:39.788755 | orchestrator |
2026-02-04 01:42:39.788761 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-02-04 01:42:39.788767 | orchestrator |
2026-02-04 01:42:39.788774 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-04 01:42:39.788780 | orchestrator | Wednesday 04 February 2026 01:42:31 +0000 (0:00:04.111) 0:00:04.563 ****
2026-02-04 01:42:39.788834 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-04 01:42:39.788844 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-02-04 01:42:39.788850 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-04 01:42:39.788857 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-04 01:42:39.788863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 01:42:39.788869 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-04 01:42:39.788876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 01:42:39.788882 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-02-04 01:42:39.788888 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-04 01:42:39.788921 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-02-04 01:42:39.788932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 01:42:39.788943 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-04 01:42:39.788954 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-02-04 01:42:39.788964 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-02-04 01:42:39.788975 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-04 01:42:39.788987 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-04 01:42:39.788996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-04 01:42:39.789006 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-04 01:42:39.789016 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-04 01:42:39.789026 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-04 01:42:39.789036 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-04 01:42:39.789046 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-04 01:42:39.789057 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-04 01:42:39.789068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-04 01:42:39.789078 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:42:39.789088 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-04 01:42:39.789098 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-04 01:42:39.789109 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-02-04 01:42:39.789118 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-04 01:42:39.789128 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-04 01:42:39.789138 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-04 01:42:39.789149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-04 01:42:39.789159 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-04 01:42:39.789169 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:42:39.789180 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-04 01:42:39.789190 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-04 01:42:39.789200 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-04 01:42:39.789210 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:42:39.789221 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-04 01:42:39.789231 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-04 01:42:39.789242 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 01:42:39.789253 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-04 01:42:39.789264 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-04 01:42:39.789275 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-04 01:42:39.789286 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 01:42:39.789297 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-04 01:42:39.789330 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-04 01:42:39.789342 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-04 01:42:39.789350 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 01:42:39.789356 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:42:39.789363 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-04 01:42:39.789369 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:42:39.789376 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-04 01:42:39.789382 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:42:39.789399 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-04 01:42:39.789419 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:42:39.789426 | orchestrator |
2026-02-04 01:42:39.789433 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2026-02-04 01:42:39.789439 | orchestrator |
2026-02-04 01:42:39.789446 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2026-02-04 01:42:39.789452 | orchestrator | Wednesday 04 February 2026 01:42:32 +0000 (0:00:00.511) 0:00:05.074 ****
2026-02-04 01:42:39.789459 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:42:39.789465 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:39.789472 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:42:39.789478 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:39.789484 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:42:39.789490 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:39.789497 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:39.789503 | orchestrator |
2026-02-04 01:42:39.789510 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2026-02-04 01:42:39.789519 | orchestrator | Wednesday 04 February 2026 01:42:33 +0000 (0:00:01.254) 0:00:06.329 ****
2026-02-04 01:42:39.789530 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:39.789540 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:42:39.789550 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:39.789560 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:42:39.789570 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:39.789580 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:42:39.789590 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:39.789599 | orchestrator |
2026-02-04 01:42:39.789609 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2026-02-04 01:42:39.789619 | orchestrator | Wednesday 04 February 2026 01:42:34 +0000 (0:00:01.332) 0:00:07.662 ****
2026-02-04 01:42:39.789629 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:42:39.789642 | orchestrator |
2026-02-04 01:42:39.789652 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2026-02-04 01:42:39.789662 | orchestrator | Wednesday 04 February 2026 01:42:35 +0000 (0:00:00.335) 0:00:07.998 ****
2026-02-04 01:42:39.789671 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:42:39.789680 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:42:39.789689 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:42:39.789700 | orchestrator | changed: [testbed-manager]
2026-02-04 01:42:39.789710 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:42:39.789719 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:42:39.789729 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:42:39.789740 | orchestrator |
2026-02-04 01:42:39.789750 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2026-02-04 01:42:39.789761 | orchestrator | Wednesday 04 February 2026 01:42:37 +0000 (0:00:02.113) 0:00:10.112 ****
2026-02-04 01:42:39.789771 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:42:39.789784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:42:39.789858 | orchestrator |
2026-02-04 01:42:39.789869 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2026-02-04 01:42:39.789880 | orchestrator | Wednesday 04 February 2026 01:42:37 +0000 (0:00:00.316) 0:00:10.429 ****
2026-02-04 01:42:39.789890 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:42:39.789897 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:42:39.789904 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:42:39.789910 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:42:39.789916 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:42:39.789923 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:42:39.789945 | orchestrator |
2026-02-04 01:42:39.789964 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2026-02-04 01:42:39.789974 | orchestrator | Wednesday 04 February 2026 01:42:38 +0000 (0:00:00.954) 0:00:11.383 ****
2026-02-04 01:42:39.789984 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:42:39.789994 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:42:39.790003 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:42:39.790013 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:42:39.790091 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:42:39.790102 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:42:39.790112 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:42:39.790122 | orchestrator |
2026-02-04 01:42:39.790133 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2026-02-04 01:42:39.790140 | orchestrator | Wednesday 04 February 2026 01:42:39 +0000 (0:00:00.593) 0:00:11.976 ****
2026-02-04 01:42:39.790146 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:42:39.790152 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:42:39.790159 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:42:39.790172 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:42:39.790178 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:42:39.790184 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:42:39.790191 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:39.790197 | orchestrator |
2026-02-04 01:42:39.790203 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-02-04 01:42:39.790211 | orchestrator | Wednesday 04 February 2026 01:42:39 +0000 (0:00:00.463) 0:00:12.439 ****
2026-02-04 01:42:39.790217 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:42:39.790223 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:42:39.790243 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:42:53.111192 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:42:53.111290 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:42:53.111311 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:42:53.111331 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:42:53.111349 | orchestrator |
2026-02-04 01:42:53.111367 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-02-04 01:42:53.111388 | orchestrator | Wednesday 04 February 2026 01:42:39 +0000 (0:00:00.236) 0:00:12.675 ****
2026-02-04 01:42:53.111402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:42:53.111429 | orchestrator |
2026-02-04 01:42:53.111439 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-02-04 01:42:53.111450 | orchestrator | Wednesday 04 February 2026 01:42:40 +0000 (0:00:00.334) 0:00:13.010 ****
2026-02-04 01:42:53.111460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:42:53.111470 | orchestrator |
2026-02-04 01:42:53.111480 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-02-04 01:42:53.111490 | orchestrator | Wednesday 04 February 2026 01:42:40 +0000 (0:00:00.364) 0:00:13.374 ****
2026-02-04 01:42:53.111500 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:53.111510 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:42:53.111520 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:42:53.111530 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:53.111540 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:53.111550 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:53.111559 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:42:53.111569 | orchestrator |
2026-02-04 01:42:53.111579 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-02-04 01:42:53.111589 | orchestrator | Wednesday 04 February 2026 01:42:42 +0000 (0:00:02.285) 0:00:15.660 ****
2026-02-04 01:42:53.111625 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:42:53.111636 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:42:53.111645 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:42:53.111655 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:42:53.111664 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:42:53.111674 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:42:53.111684 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:42:53.111693 | orchestrator |
2026-02-04 01:42:53.111703 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-02-04 01:42:53.111713 | orchestrator | Wednesday 04 February 2026 01:42:43 +0000 (0:00:00.256) 0:00:15.916 ****
2026-02-04 01:42:53.111723 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:53.111733 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:53.111746 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:53.111763 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:53.111779 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:42:53.111794 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:42:53.111811 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:42:53.111860 | orchestrator |
2026-02-04 01:42:53.111878 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-02-04 01:42:53.111895 | orchestrator | Wednesday 04 February 2026 01:42:43 +0000 (0:00:00.521) 0:00:16.438 ****
2026-02-04 01:42:53.111911 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:42:53.111924 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:42:53.111936 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:42:53.111947 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:42:53.111959 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:42:53.111971 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:42:53.111983 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:42:53.111994 | orchestrator |
2026-02-04 01:42:53.112004 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-02-04 01:42:53.112015 | orchestrator | Wednesday 04 February 2026 01:42:44 +0000 (0:00:00.390) 0:00:16.829 ****
2026-02-04 01:42:53.112025 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:53.112034 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:42:53.112044 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:42:53.112054 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:42:53.112063 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:42:53.112073 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:42:53.112082 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:42:53.112092 | orchestrator |
2026-02-04 01:42:53.112101 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-02-04 01:42:53.112125 | orchestrator | Wednesday 04 February 2026 01:42:44 +0000 (0:00:00.529) 0:00:17.359 ****
2026-02-04 01:42:53.112136 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:53.112145 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:42:53.112165 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:42:53.112175 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:42:53.112185 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:42:53.112194 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:42:53.112203 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:42:53.112213 | orchestrator |
2026-02-04 01:42:53.112223 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-02-04 01:42:53.112232 | orchestrator | Wednesday 04 February 2026 01:42:45 +0000 (0:00:01.046) 0:00:18.406 ****
2026-02-04 01:42:53.112242 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:42:53.112261 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:42:53.112271 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:42:53.112281 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:53.112293 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:53.112311 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:53.112348 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:53.112377 | orchestrator |
2026-02-04 01:42:53.112395 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-02-04 01:42:53.112422 | orchestrator | Wednesday 04 February 2026 01:42:46 +0000 (0:00:01.072) 0:00:19.478 ****
2026-02-04 01:42:53.112463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:42:53.112481 | orchestrator |
2026-02-04 01:42:53.112495 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-02-04 01:42:53.112505 | orchestrator | Wednesday 04 February 2026 01:42:47 +0000 (0:00:00.398) 0:00:19.876 ****
2026-02-04 01:42:53.112515 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:42:53.112524 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:42:53.112534 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:42:53.112544 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:42:53.112553 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:42:53.112563 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:42:53.112572 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:42:53.112582 | orchestrator |
2026-02-04 01:42:53.112592 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-04 01:42:53.112602 | orchestrator | Wednesday 04 February 2026 01:42:48 +0000 (0:00:01.275) 0:00:21.152 ****
2026-02-04 01:42:53.112611 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:53.112621 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:53.112630 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:53.112640 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:53.112650 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:42:53.112659 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:42:53.112693 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:42:53.112703 | orchestrator |
2026-02-04 01:42:53.112713 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-04 01:42:53.112723 | orchestrator | Wednesday 04 February 2026 01:42:48 +0000 (0:00:00.289) 0:00:21.442 ****
2026-02-04 01:42:53.112733 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:53.112743 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:53.112752 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:53.112762 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:53.112771 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:42:53.112781 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:42:53.112790 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:42:53.112805 | orchestrator |
2026-02-04 01:42:53.112888 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-04 01:42:53.112909 | orchestrator | Wednesday 04 February 2026 01:42:48 +0000 (0:00:00.247) 0:00:21.689 ****
2026-02-04 01:42:53.112925 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:53.112939 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:53.112949 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:53.112959 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:53.112968 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:42:53.112978 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:42:53.112987 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:42:53.112997 | orchestrator |
2026-02-04 01:42:53.113006 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-04 01:42:53.113016 | orchestrator | Wednesday 04 February 2026 01:42:49 +0000 (0:00:00.297) 0:00:21.986 ****
2026-02-04 01:42:53.113027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:42:53.113039 | orchestrator |
2026-02-04 01:42:53.113049 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-04 01:42:53.113058 | orchestrator | Wednesday 04 February 2026 01:42:49 +0000 (0:00:00.530) 0:00:22.312 ****
2026-02-04 01:42:53.113068 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:53.113077 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:53.113097 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:53.113107 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:53.113116 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:42:53.113126 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:42:53.113135 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:42:53.113145 | orchestrator |
2026-02-04 01:42:53.113155 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-04 01:42:53.113164 | orchestrator | Wednesday 04 February 2026 01:42:50 +0000 (0:00:00.530) 0:00:22.842 ****
2026-02-04 01:42:53.113174 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:42:53.113184 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:42:53.113193 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:42:53.113203 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:42:53.113213 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:42:53.113222 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:42:53.113232 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:42:53.113241 | orchestrator |
2026-02-04 01:42:53.113251 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-04 01:42:53.113261 | orchestrator | Wednesday 04 February 2026 01:42:50 +0000 (0:00:00.286) 0:00:23.129 ****
2026-02-04 01:42:53.113270 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:53.113280 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:53.113290 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:53.113299 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:53.113309 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:42:53.113318 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:42:53.113328 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:42:53.113338 | orchestrator |
2026-02-04 01:42:53.113347 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-04 01:42:53.113357 | orchestrator | Wednesday 04 February 2026 01:42:51 +0000 (0:00:01.071) 0:00:24.201 ****
2026-02-04 01:42:53.113366 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:53.113376 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:53.113393 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:53.113409 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:53.113425 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:42:53.113440 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:42:53.113456 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:42:53.113471 | orchestrator |
2026-02-04 01:42:53.113489 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-04 01:42:53.113506 | orchestrator | Wednesday 04 February 2026 01:42:51 +0000 (0:00:00.574) 0:00:24.776 ****
2026-02-04 01:42:53.113522 | orchestrator | ok: [testbed-manager]
2026-02-04 01:42:53.113539 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:42:53.113556 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:42:53.113585 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:42:53.113607 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:43:34.408826 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:43:34.408954 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:43:34.408966 | orchestrator |
2026-02-04 01:43:34.408974 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-04 01:43:34.408982 | orchestrator | Wednesday 04 February 2026 01:42:53 +0000 (0:00:01.123) 0:00:25.899 ****
2026-02-04 01:43:34.408989 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:43:34.408997 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:43:34.409003 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:43:34.409010 | orchestrator | changed: [testbed-manager]
2026-02-04 01:43:34.409017 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:43:34.409024 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:43:34.409030 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:43:34.409037 | orchestrator |
2026-02-04 01:43:34.409043 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-02-04 01:43:34.409050 | orchestrator | Wednesday 04 February 2026 01:43:08 +0000 (0:00:14.999) 0:00:40.899 ****
2026-02-04 01:43:34.409056 | orchestrator | ok: [testbed-manager]
2026-02-04 01:43:34.409082 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:43:34.409089 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:43:34.409096 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:43:34.409102 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:43:34.409108 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:43:34.409115 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:43:34.409121 | orchestrator |
2026-02-04 01:43:34.409128 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-04 01:43:34.409134 | orchestrator | Wednesday 04 February 2026 01:43:08 +0000 (0:00:00.254) 0:00:41.153 ****
2026-02-04 01:43:34.409141 | orchestrator | ok: [testbed-manager]
2026-02-04 01:43:34.409147 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:43:34.409153 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:43:34.409160 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:43:34.409166 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:43:34.409172 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:43:34.409178 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:43:34.409185 | orchestrator |
2026-02-04 01:43:34.409191 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-04 01:43:34.409197 | orchestrator | Wednesday 04 February 2026 01:43:08 +0000 (0:00:00.268) 0:00:41.422 ****
2026-02-04 01:43:34.409204 | orchestrator | ok: [testbed-manager]
2026-02-04 01:43:34.409210 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:43:34.409216 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:43:34.409223 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:43:34.409229 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:43:34.409235 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:43:34.409242 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:43:34.409248 | orchestrator |
2026-02-04 01:43:34.409254 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-04 01:43:34.409261 | orchestrator | Wednesday 04 February 2026 01:43:08 +0000 (0:00:00.270) 0:00:41.692 ****
2026-02-04
01:43:34.409269 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:43:34.409278 | orchestrator | 2026-02-04 01:43:34.409284 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-04 01:43:34.409291 | orchestrator | Wednesday 04 February 2026 01:43:09 +0000 (0:00:00.337) 0:00:42.030 **** 2026-02-04 01:43:34.409297 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:43:34.409303 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:43:34.409310 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:43:34.409316 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:43:34.409322 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:43:34.409328 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:43:34.409335 | orchestrator | ok: [testbed-manager] 2026-02-04 01:43:34.409341 | orchestrator | 2026-02-04 01:43:34.409347 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-04 01:43:34.409354 | orchestrator | Wednesday 04 February 2026 01:43:11 +0000 (0:00:02.529) 0:00:44.560 **** 2026-02-04 01:43:34.409360 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:43:34.409366 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:43:34.409373 | orchestrator | changed: [testbed-manager] 2026-02-04 01:43:34.409379 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:43:34.409386 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:43:34.409392 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:43:34.409398 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:43:34.409404 | orchestrator | 2026-02-04 01:43:34.409411 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-02-04 01:43:34.409417 | 
orchestrator | Wednesday 04 February 2026 01:43:12 +0000 (0:00:01.066) 0:00:45.626 **** 2026-02-04 01:43:34.409423 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:43:34.409430 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:43:34.409436 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:43:34.409447 | orchestrator | ok: [testbed-manager] 2026-02-04 01:43:34.409454 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:43:34.409460 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:43:34.409467 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:43:34.409473 | orchestrator | 2026-02-04 01:43:34.409479 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-02-04 01:43:34.409486 | orchestrator | Wednesday 04 February 2026 01:43:13 +0000 (0:00:00.852) 0:00:46.478 **** 2026-02-04 01:43:34.409493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:43:34.409501 | orchestrator | 2026-02-04 01:43:34.409518 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-02-04 01:43:34.409526 | orchestrator | Wednesday 04 February 2026 01:43:14 +0000 (0:00:00.324) 0:00:46.802 **** 2026-02-04 01:43:34.409532 | orchestrator | changed: [testbed-manager] 2026-02-04 01:43:34.409539 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:43:34.409545 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:43:34.409551 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:43:34.409557 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:43:34.409564 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:43:34.409570 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:43:34.409576 | orchestrator | 2026-02-04 01:43:34.409595 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-02-04 01:43:34.409602 | orchestrator | Wednesday 04 February 2026 01:43:15 +0000 (0:00:01.001) 0:00:47.804 **** 2026-02-04 01:43:34.409608 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:43:34.409615 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:43:34.409621 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:43:34.409627 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:43:34.409633 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:43:34.409640 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:43:34.409646 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:43:34.409652 | orchestrator | 2026-02-04 01:43:34.409658 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-02-04 01:43:34.409667 | orchestrator | Wednesday 04 February 2026 01:43:15 +0000 (0:00:00.263) 0:00:48.068 **** 2026-02-04 01:43:34.409677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:43:34.409687 | orchestrator | 2026-02-04 01:43:34.409694 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-02-04 01:43:34.409700 | orchestrator | Wednesday 04 February 2026 01:43:15 +0000 (0:00:00.349) 0:00:48.418 **** 2026-02-04 01:43:34.409707 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:43:34.409713 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:43:34.409719 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:43:34.409725 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:43:34.409731 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:43:34.409737 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:43:34.409744 | orchestrator | ok: [testbed-manager] 2026-02-04 01:43:34.409750 | 
orchestrator | 2026-02-04 01:43:34.409756 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-02-04 01:43:34.409762 | orchestrator | Wednesday 04 February 2026 01:43:17 +0000 (0:00:01.544) 0:00:49.962 **** 2026-02-04 01:43:34.409769 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:43:34.409776 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:43:34.409786 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:43:34.409793 | orchestrator | changed: [testbed-manager] 2026-02-04 01:43:34.409799 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:43:34.409805 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:43:34.409811 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:43:34.409830 | orchestrator | 2026-02-04 01:43:34.409836 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-02-04 01:43:34.409843 | orchestrator | Wednesday 04 February 2026 01:43:18 +0000 (0:00:01.033) 0:00:50.996 **** 2026-02-04 01:43:34.409849 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:43:34.409855 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:43:34.409861 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:43:34.409868 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:43:34.409874 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:43:34.409880 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:43:34.409886 | orchestrator | changed: [testbed-manager] 2026-02-04 01:43:34.409892 | orchestrator | 2026-02-04 01:43:34.409899 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-02-04 01:43:34.409905 | orchestrator | Wednesday 04 February 2026 01:43:31 +0000 (0:00:13.734) 0:01:04.731 **** 2026-02-04 01:43:34.409911 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:43:34.409940 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:43:34.409951 | orchestrator | ok: 
[testbed-manager] 2026-02-04 01:43:34.409960 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:43:34.409967 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:43:34.409973 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:43:34.409979 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:43:34.409985 | orchestrator | 2026-02-04 01:43:34.409991 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-02-04 01:43:34.409998 | orchestrator | Wednesday 04 February 2026 01:43:32 +0000 (0:00:00.732) 0:01:05.463 **** 2026-02-04 01:43:34.410004 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:43:34.410010 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:43:34.410063 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:43:34.410069 | orchestrator | ok: [testbed-manager] 2026-02-04 01:43:34.410076 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:43:34.410082 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:43:34.410088 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:43:34.410094 | orchestrator | 2026-02-04 01:43:34.410101 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-02-04 01:43:34.410107 | orchestrator | Wednesday 04 February 2026 01:43:33 +0000 (0:00:00.845) 0:01:06.309 **** 2026-02-04 01:43:34.410113 | orchestrator | ok: [testbed-manager] 2026-02-04 01:43:34.410119 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:43:34.410125 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:43:34.410131 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:43:34.410138 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:43:34.410144 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:43:34.410150 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:43:34.410156 | orchestrator | 2026-02-04 01:43:34.410162 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-02-04 01:43:34.410169 | orchestrator | 
Wednesday 04 February 2026 01:43:33 +0000 (0:00:00.270) 0:01:06.579 **** 2026-02-04 01:43:34.410175 | orchestrator | ok: [testbed-manager] 2026-02-04 01:43:34.410181 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:43:34.410188 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:43:34.410194 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:43:34.410200 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:43:34.410206 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:43:34.410212 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:43:34.410218 | orchestrator | 2026-02-04 01:43:34.410229 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-02-04 01:43:34.410236 | orchestrator | Wednesday 04 February 2026 01:43:34 +0000 (0:00:00.290) 0:01:06.870 **** 2026-02-04 01:43:34.410242 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:43:34.410249 | orchestrator | 2026-02-04 01:43:34.410261 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-02-04 01:45:51.496678 | orchestrator | Wednesday 04 February 2026 01:43:34 +0000 (0:00:00.328) 0:01:07.199 **** 2026-02-04 01:45:51.496795 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:45:51.496813 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:45:51.496825 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:45:51.496836 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:45:51.496848 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:45:51.496859 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:45:51.496871 | orchestrator | ok: [testbed-manager] 2026-02-04 01:45:51.496883 | orchestrator | 2026-02-04 01:45:51.496895 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-02-04 01:45:51.496907 | orchestrator | Wednesday 04 February 2026 01:43:35 +0000 (0:00:01.539) 0:01:08.738 **** 2026-02-04 01:45:51.496918 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:45:51.496931 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:45:51.496942 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:45:51.496953 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:45:51.496964 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:45:51.496975 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:45:51.496985 | orchestrator | changed: [testbed-manager] 2026-02-04 01:45:51.496996 | orchestrator | 2026-02-04 01:45:51.497008 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-02-04 01:45:51.497020 | orchestrator | Wednesday 04 February 2026 01:43:36 +0000 (0:00:00.542) 0:01:09.280 **** 2026-02-04 01:45:51.497031 | orchestrator | ok: [testbed-manager] 2026-02-04 01:45:51.497042 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:45:51.497052 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:45:51.497064 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:45:51.497074 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:45:51.497086 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:45:51.497096 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:45:51.497107 | orchestrator | 2026-02-04 01:45:51.497119 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-02-04 01:45:51.497130 | orchestrator | Wednesday 04 February 2026 01:43:36 +0000 (0:00:00.284) 0:01:09.565 **** 2026-02-04 01:45:51.497142 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:45:51.497153 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:45:51.497163 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:45:51.497174 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:45:51.497185 | orchestrator | ok: [testbed-node-1] 
2026-02-04 01:45:51.497229 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:45:51.497244 | orchestrator | ok: [testbed-manager]
2026-02-04 01:45:51.497257 | orchestrator |
2026-02-04 01:45:51.497271 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-04 01:45:51.497284 | orchestrator | Wednesday 04 February 2026 01:43:37 +0000 (0:00:01.085) 0:01:10.650 ****
2026-02-04 01:45:51.497296 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:45:51.497309 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:45:51.497321 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:45:51.497334 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:45:51.497346 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:45:51.497359 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:45:51.497373 | orchestrator | changed: [testbed-manager]
2026-02-04 01:45:51.497384 | orchestrator |
2026-02-04 01:45:51.497399 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-04 01:45:51.497411 | orchestrator | Wednesday 04 February 2026 01:43:39 +0000 (0:00:01.632) 0:01:12.283 ****
2026-02-04 01:45:51.497423 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:45:51.497434 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:45:51.497445 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:45:51.497456 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:45:51.497468 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:45:51.497479 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:45:51.497490 | orchestrator | ok: [testbed-manager]
2026-02-04 01:45:51.497501 | orchestrator |
2026-02-04 01:45:51.497512 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-04 01:45:51.497551 | orchestrator | Wednesday 04 February 2026 01:43:41 +0000 (0:00:02.375) 0:01:14.658 ****
2026-02-04 01:45:51.497563 | orchestrator | ok: [testbed-manager]
2026-02-04 01:45:51.497574 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:45:51.497585 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:45:51.497596 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:45:51.497606 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:45:51.497617 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:45:51.497644 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:45:51.497665 | orchestrator |
2026-02-04 01:45:51.497677 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-04 01:45:51.497688 | orchestrator | Wednesday 04 February 2026 01:44:21 +0000 (0:00:39.406) 0:01:54.064 ****
2026-02-04 01:45:51.497699 | orchestrator | changed: [testbed-manager]
2026-02-04 01:45:51.497710 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:45:51.497721 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:45:51.497732 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:45:51.497743 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:45:51.497754 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:45:51.497765 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:45:51.497776 | orchestrator |
2026-02-04 01:45:51.497787 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-04 01:45:51.497798 | orchestrator | Wednesday 04 February 2026 01:45:34 +0000 (0:01:13.548) 0:03:07.613 ****
2026-02-04 01:45:51.497809 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:45:51.497820 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:45:51.497831 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:45:51.497842 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:45:51.497853 | orchestrator | ok: [testbed-manager]
2026-02-04 01:45:51.497864 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:45:51.497875 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:45:51.497886 | orchestrator |
2026-02-04 01:45:51.497898 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-04 01:45:51.497909 | orchestrator | Wednesday 04 February 2026 01:45:36 +0000 (0:00:01.654) 0:03:09.267 ****
2026-02-04 01:45:51.497920 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:45:51.497932 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:45:51.497942 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:45:51.497953 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:45:51.497964 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:45:51.497975 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:45:51.497986 | orchestrator | changed: [testbed-manager]
2026-02-04 01:45:51.497997 | orchestrator |
2026-02-04 01:45:51.498009 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-04 01:45:51.498087 | orchestrator | Wednesday 04 February 2026 01:45:50 +0000 (0:00:13.772) 0:03:23.040 ****
2026-02-04 01:45:51.498152 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-04 01:45:51.498233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-04 01:45:51.498263 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-04 01:45:51.498277 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-04 01:45:51.498288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-04 01:45:51.498300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-04 01:45:51.498311 | orchestrator |
2026-02-04 01:45:51.498322 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-04 01:45:51.498333 | orchestrator | Wednesday 04 February 2026 01:45:50 +0000 (0:00:00.443) 0:03:23.483 ****
2026-02-04 01:45:51.498344 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 01:45:51.498356 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 01:45:51.498367 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:45:51.498378 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:45:51.498389 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 01:45:51.498400 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 01:45:51.498411 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:45:51.498422 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:45:51.498433 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 01:45:51.498444 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 01:45:51.498455 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 01:45:51.498466 | orchestrator |
2026-02-04 01:45:51.498476 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-04 01:45:51.498487 | orchestrator | Wednesday 04 February 2026 01:45:51 +0000 (0:00:00.704) 0:03:24.188 ****
2026-02-04 01:45:51.498503 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 01:45:51.498516 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 01:45:51.498527 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 01:45:51.498538 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 01:45:51.498549 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 01:45:51.498568 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 01:45:55.856043 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 01:45:55.856116 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 01:45:55.856139 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 01:45:55.856144 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 01:45:55.856149 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 01:45:55.856153 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 01:45:55.856157 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 01:45:55.856161 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 01:45:55.856166 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:45:55.856171 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 01:45:55.856174 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 01:45:55.856178 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 01:45:55.856183 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 01:45:55.856186 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 01:45:55.856190 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 01:45:55.856194 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:45:55.856198 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 01:45:55.856234 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 01:45:55.856239 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 01:45:55.856243 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 01:45:55.856249 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 01:45:55.856255 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 01:45:55.856261 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 01:45:55.856267 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 01:45:55.856273 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 01:45:55.856279 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 01:45:55.856286 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 01:45:55.856292 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 01:45:55.856298 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 01:45:55.856305 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 01:45:55.856309 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 01:45:55.856313 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 01:45:55.856317 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 01:45:55.856321 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 01:45:55.856325 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 01:45:55.856333 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 01:45:55.856337 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:45:55.856341 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:45:55.856355 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 01:45:55.856359 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 01:45:55.856363 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-04 01:45:55.856367 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 01:45:55.856371 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 01:45:55.856386 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-04 01:45:55.856390 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 01:45:55.856393 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 01:45:55.856397 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-04 01:45:55.856401 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 01:45:55.856405 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 01:45:55.856408 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-04 01:45:55.856412 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 01:45:55.856416 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 01:45:55.856420 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-04 01:45:55.856423 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 01:45:55.856427 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 01:45:55.856431 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-04 01:45:55.856435 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 01:45:55.856438 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 01:45:55.856442 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-04 01:45:55.856446 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 01:45:55.856450 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 01:45:55.856453 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-04 01:45:55.856457 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 01:45:55.856461 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 01:45:55.856465 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-04 01:45:55.856468 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 01:45:55.856472 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 01:45:55.856476 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-04 01:45:55.856485 | orchestrator |
2026-02-04 01:45:55.856489 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-04 01:45:55.856493 | orchestrator | Wednesday 04 February 2026 01:45:54 +0000 (0:00:03.403) 0:03:27.591 ****
2026-02-04 01:45:55.856497 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 01:45:55.856501 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 01:45:55.856505 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 01:45:55.856508 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 01:45:55.856512 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 01:45:55.856516 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 01:45:55.856520 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-04 01:45:55.856523 | orchestrator |
2026-02-04 01:45:55.856527 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-04 01:45:55.856531 | orchestrator | Wednesday 04 February 2026 01:45:55 +0000 (0:00:00.557) 0:03:28.149 ****
2026-02-04 01:45:55.856535 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 01:45:55.856539 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:45:55.856542 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 01:45:55.856546 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:45:55.856553 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 01:45:55.856557 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:45:55.856561 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 01:45:55.856564 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:45:55.856568 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 01:45:55.856572 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 01:45:55.856579 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04 01:46:08.962101 | orchestrator |
2026-02-04 01:46:08.962216 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-04 01:46:08.962349 | orchestrator | Wednesday 04 February 2026 01:45:55 +0000 (0:00:00.493) 0:03:28.643 ****
2026-02-04 01:46:08.962365 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-04
01:46:08.962378 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-04 01:46:08.962390 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:46:08.962403 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:46:08.962414 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-04 01:46:08.962425 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:46:08.962437 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-04 01:46:08.962448 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:46:08.962459 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-04 01:46:08.962471 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-04 01:46:08.962482 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-04 01:46:08.962493 | orchestrator | 2026-02-04 01:46:08.962504 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-02-04 01:46:08.962548 | orchestrator | Wednesday 04 February 2026 01:45:56 +0000 (0:00:00.543) 0:03:29.187 **** 2026-02-04 01:46:08.962573 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-04 01:46:08.962601 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:46:08.962620 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-04 01:46:08.962639 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:46:08.962657 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-04 01:46:08.962675 
| orchestrator | skipping: [testbed-node-1] 2026-02-04 01:46:08.962693 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-04 01:46:08.962711 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:46:08.962730 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-04 01:46:08.962750 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-04 01:46:08.962771 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-04 01:46:08.962790 | orchestrator | 2026-02-04 01:46:08.962810 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-02-04 01:46:08.962830 | orchestrator | Wednesday 04 February 2026 01:45:56 +0000 (0:00:00.547) 0:03:29.735 **** 2026-02-04 01:46:08.962849 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:46:08.962868 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:46:08.962887 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:46:08.962906 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:46:08.962923 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:46:08.962942 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:46:08.962961 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:46:08.962980 | orchestrator | 2026-02-04 01:46:08.962998 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-02-04 01:46:08.963018 | orchestrator | Wednesday 04 February 2026 01:45:57 +0000 (0:00:00.356) 0:03:30.091 **** 2026-02-04 01:46:08.963038 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:46:08.963058 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:46:08.963077 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:46:08.963096 | orchestrator | ok: [testbed-node-3] 
2026-02-04 01:46:08.963114 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:46:08.963133 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:46:08.963151 | orchestrator | ok: [testbed-manager] 2026-02-04 01:46:08.963170 | orchestrator | 2026-02-04 01:46:08.963190 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-02-04 01:46:08.963209 | orchestrator | Wednesday 04 February 2026 01:46:03 +0000 (0:00:05.868) 0:03:35.959 **** 2026-02-04 01:46:08.963260 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-02-04 01:46:08.963282 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:46:08.963301 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-02-04 01:46:08.963319 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:46:08.963337 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-02-04 01:46:08.963357 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:46:08.963376 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-02-04 01:46:08.963394 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:46:08.963414 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-02-04 01:46:08.963434 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-02-04 01:46:08.963479 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:46:08.963500 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:46:08.963520 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-02-04 01:46:08.963539 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:46:08.963575 | orchestrator | 2026-02-04 01:46:08.963593 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-04 01:46:08.963612 | orchestrator | Wednesday 04 February 2026 01:46:03 +0000 (0:00:00.311) 0:03:36.271 **** 2026-02-04 01:46:08.963630 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-02-04 01:46:08.963648 | orchestrator | 
ok: [testbed-node-4] => (item=cron) 2026-02-04 01:46:08.963666 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-02-04 01:46:08.963713 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-04 01:46:08.963735 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-04 01:46:08.963754 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-04 01:46:08.963772 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-04 01:46:08.963792 | orchestrator | 2026-02-04 01:46:08.963810 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-04 01:46:08.963829 | orchestrator | Wednesday 04 February 2026 01:46:04 +0000 (0:00:01.201) 0:03:37.473 **** 2026-02-04 01:46:08.963850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:46:08.963872 | orchestrator | 2026-02-04 01:46:08.963889 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-04 01:46:08.963901 | orchestrator | Wednesday 04 February 2026 01:46:05 +0000 (0:00:00.441) 0:03:37.914 **** 2026-02-04 01:46:08.963912 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:46:08.963923 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:46:08.963934 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:46:08.963945 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:46:08.963956 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:46:08.963967 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:46:08.963985 | orchestrator | ok: [testbed-manager] 2026-02-04 01:46:08.964002 | orchestrator | 2026-02-04 01:46:08.964021 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-04 01:46:08.964032 | orchestrator | Wednesday 04 February 2026 01:46:06 +0000 
(0:00:01.124) 0:03:39.039 **** 2026-02-04 01:46:08.964044 | orchestrator | ok: [testbed-manager] 2026-02-04 01:46:08.964055 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:46:08.964087 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:46:08.964098 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:46:08.964109 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:46:08.964120 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:46:08.964131 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:46:08.964142 | orchestrator | 2026-02-04 01:46:08.964153 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-04 01:46:08.964164 | orchestrator | Wednesday 04 February 2026 01:46:06 +0000 (0:00:00.585) 0:03:39.624 **** 2026-02-04 01:46:08.964175 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:46:08.964186 | orchestrator | changed: [testbed-manager] 2026-02-04 01:46:08.964197 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:46:08.964208 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:46:08.964219 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:46:08.964257 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:46:08.964276 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:46:08.964297 | orchestrator | 2026-02-04 01:46:08.964316 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-04 01:46:08.964332 | orchestrator | Wednesday 04 February 2026 01:46:07 +0000 (0:00:00.572) 0:03:40.197 **** 2026-02-04 01:46:08.964344 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:46:08.964355 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:46:08.964366 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:46:08.964377 | orchestrator | ok: [testbed-manager] 2026-02-04 01:46:08.964388 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:46:08.964399 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:46:08.964410 | orchestrator | ok: 
[testbed-node-2] 2026-02-04 01:46:08.964421 | orchestrator | 2026-02-04 01:46:08.964432 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-04 01:46:08.964459 | orchestrator | Wednesday 04 February 2026 01:46:07 +0000 (0:00:00.597) 0:03:40.795 **** 2026-02-04 01:46:08.964476 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770168147.1896758, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:46:08.964492 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770168136.727582, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:46:08.964512 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770168124.6006796, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:46:08.964537 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770168139.883882, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:46:13.558456 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770168127.0012722, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:46:13.558559 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770168115.6417122, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:46:13.558576 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770168127.157442, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:46:13.558615 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:46:13.558628 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:46:13.558654 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 
'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:46:13.558666 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:46:13.558708 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:46:13.558721 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}) 2026-02-04 01:46:13.558732 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 01:46:13.558752 | orchestrator | 2026-02-04 01:46:13.558766 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-04 01:46:13.558778 | orchestrator | Wednesday 04 February 2026 01:46:08 +0000 (0:00:00.952) 0:03:41.747 **** 2026-02-04 01:46:13.558790 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:46:13.558802 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:46:13.558813 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:46:13.558824 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:46:13.558835 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:46:13.558847 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:46:13.558858 | orchestrator | changed: [testbed-manager] 2026-02-04 01:46:13.558868 | orchestrator | 2026-02-04 01:46:13.558880 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-02-04 01:46:13.558891 | orchestrator | Wednesday 04 February 2026 01:46:09 +0000 (0:00:00.965) 0:03:42.712 **** 2026-02-04 01:46:13.558902 | orchestrator | changed: [testbed-manager] 2026-02-04 01:46:13.558912 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:46:13.558923 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:46:13.558934 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:46:13.558945 | orchestrator | 
changed: [testbed-node-0] 2026-02-04 01:46:13.558956 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:46:13.558967 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:46:13.558980 | orchestrator | 2026-02-04 01:46:13.558993 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-04 01:46:13.559006 | orchestrator | Wednesday 04 February 2026 01:46:10 +0000 (0:00:01.074) 0:03:43.787 **** 2026-02-04 01:46:13.559018 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:46:13.559031 | orchestrator | changed: [testbed-manager] 2026-02-04 01:46:13.559044 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:46:13.559057 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:46:13.559070 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:46:13.559082 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:46:13.559095 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:46:13.559107 | orchestrator | 2026-02-04 01:46:13.559120 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-04 01:46:13.559133 | orchestrator | Wednesday 04 February 2026 01:46:12 +0000 (0:00:01.031) 0:03:44.819 **** 2026-02-04 01:46:13.559145 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:46:13.559157 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:46:13.559176 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:46:13.559188 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:46:13.559201 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:46:13.559213 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:46:13.559226 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:46:13.559267 | orchestrator | 2026-02-04 01:46:13.559282 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-04 01:46:13.559295 | orchestrator | Wednesday 04 February 2026 01:46:12 +0000 (0:00:00.386) 
0:03:45.205 **** 2026-02-04 01:46:13.559308 | orchestrator | ok: [testbed-manager] 2026-02-04 01:46:13.559321 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:46:13.559333 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:46:13.559346 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:46:13.559359 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:46:13.559372 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:46:13.559385 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:46:13.559398 | orchestrator | 2026-02-04 01:46:13.559410 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-04 01:46:13.559421 | orchestrator | Wednesday 04 February 2026 01:46:13 +0000 (0:00:00.710) 0:03:45.916 **** 2026-02-04 01:46:13.559433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:46:13.559463 | orchestrator | 2026-02-04 01:46:13.559475 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-04 01:46:13.559494 | orchestrator | Wednesday 04 February 2026 01:46:13 +0000 (0:00:00.430) 0:03:46.347 **** 2026-02-04 01:47:29.261882 | orchestrator | ok: [testbed-manager] 2026-02-04 01:47:29.262003 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:47:29.262095 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:47:29.262112 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:47:29.262126 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:47:29.262138 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:47:29.262150 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:47:29.262163 | orchestrator | 2026-02-04 01:47:29.262176 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-02-04 
01:47:29.262192 | orchestrator | Wednesday 04 February 2026 01:46:20 +0000 (0:00:06.688) 0:03:53.036 **** 2026-02-04 01:47:29.262204 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:47:29.262216 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:47:29.262229 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:47:29.262241 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:47:29.262254 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:47:29.262267 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:47:29.262280 | orchestrator | ok: [testbed-manager] 2026-02-04 01:47:29.262293 | orchestrator | 2026-02-04 01:47:29.262306 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-04 01:47:29.262319 | orchestrator | Wednesday 04 February 2026 01:46:21 +0000 (0:00:01.151) 0:03:54.187 **** 2026-02-04 01:47:29.262333 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:47:29.262346 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:47:29.262360 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:47:29.262407 | orchestrator | ok: [testbed-manager] 2026-02-04 01:47:29.262421 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:47:29.262434 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:47:29.262448 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:47:29.262460 | orchestrator | 2026-02-04 01:47:29.262474 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-04 01:47:29.262488 | orchestrator | Wednesday 04 February 2026 01:46:22 +0000 (0:00:01.072) 0:03:55.259 **** 2026-02-04 01:47:29.262502 | orchestrator | ok: [testbed-manager] 2026-02-04 01:47:29.262515 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:47:29.262528 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:47:29.262539 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:47:29.262554 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:47:29.262567 | orchestrator | ok: [testbed-node-1] 2026-02-04 
01:47:29.262580 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:47:29.262593 | orchestrator | 2026-02-04 01:47:29.262606 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-02-04 01:47:29.262622 | orchestrator | Wednesday 04 February 2026 01:46:22 +0000 (0:00:00.310) 0:03:55.570 **** 2026-02-04 01:47:29.262639 | orchestrator | ok: [testbed-manager] 2026-02-04 01:47:29.262653 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:47:29.262666 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:47:29.262680 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:47:29.262694 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:47:29.262708 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:47:29.262721 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:47:29.262735 | orchestrator | 2026-02-04 01:47:29.262748 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-02-04 01:47:29.262762 | orchestrator | Wednesday 04 February 2026 01:46:23 +0000 (0:00:00.336) 0:03:55.906 **** 2026-02-04 01:47:29.262775 | orchestrator | ok: [testbed-manager] 2026-02-04 01:47:29.262788 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:47:29.262800 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:47:29.262846 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:47:29.262861 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:47:29.262874 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:47:29.262887 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:47:29.262900 | orchestrator | 2026-02-04 01:47:29.262912 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-02-04 01:47:29.262924 | orchestrator | Wednesday 04 February 2026 01:46:23 +0000 (0:00:00.313) 0:03:56.219 **** 2026-02-04 01:47:29.262935 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:47:29.262947 | orchestrator | ok: [testbed-node-4] 2026-02-04 
01:47:29.262959 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:47:29.262971 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:47:29.262982 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:47:29.262995 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:47:29.263008 | orchestrator | ok: [testbed-manager] 2026-02-04 01:47:29.263020 | orchestrator | 2026-02-04 01:47:29.263032 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-02-04 01:47:29.263045 | orchestrator | Wednesday 04 February 2026 01:46:28 +0000 (0:00:05.231) 0:04:01.451 **** 2026-02-04 01:47:29.263061 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:47:29.263077 | orchestrator | 2026-02-04 01:47:29.263090 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-02-04 01:47:29.263104 | orchestrator | Wednesday 04 February 2026 01:46:29 +0000 (0:00:00.472) 0:04:01.923 **** 2026-02-04 01:47:29.263117 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-02-04 01:47:29.263130 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-02-04 01:47:29.263143 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-02-04 01:47:29.263154 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-02-04 01:47:29.263165 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:47:29.263177 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:47:29.263210 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-02-04 01:47:29.263223 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-02-04 01:47:29.263234 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-02-04 
01:47:29.263247 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-02-04 01:47:29.263259 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:47:29.263273 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-02-04 01:47:29.263286 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-02-04 01:47:29.263301 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:47:29.263315 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:47:29.263328 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-02-04 01:47:29.263368 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-02-04 01:47:29.263466 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:47:29.263480 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-02-04 01:47:29.263493 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-02-04 01:47:29.263505 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:47:29.263517 | orchestrator | 2026-02-04 01:47:29.263529 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-02-04 01:47:29.263543 | orchestrator | Wednesday 04 February 2026 01:46:29 +0000 (0:00:00.358) 0:04:02.281 **** 2026-02-04 01:47:29.263556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:47:29.263567 | orchestrator | 2026-02-04 01:47:29.263576 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-02-04 01:47:29.263601 | orchestrator | Wednesday 04 February 2026 01:46:29 +0000 (0:00:00.448) 0:04:02.730 **** 2026-02-04 01:47:29.263611 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-02-04 01:47:29.263621 | 
orchestrator | skipping: [testbed-manager] 2026-02-04 01:47:29.263630 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-02-04 01:47:29.263641 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-02-04 01:47:29.263651 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:47:29.263661 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:47:29.263671 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-02-04 01:47:29.263682 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-02-04 01:47:29.263692 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:47:29.263703 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-02-04 01:47:29.263714 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:47:29.263726 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:47:29.263737 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-02-04 01:47:29.263748 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:47:29.263759 | orchestrator | 2026-02-04 01:47:29.263771 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-02-04 01:47:29.263782 | orchestrator | Wednesday 04 February 2026 01:46:30 +0000 (0:00:00.343) 0:04:03.074 **** 2026-02-04 01:47:29.263795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:47:29.263806 | orchestrator | 2026-02-04 01:47:29.263817 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-02-04 01:47:29.263829 | orchestrator | Wednesday 04 February 2026 01:46:30 +0000 (0:00:00.488) 0:04:03.562 **** 2026-02-04 01:47:29.263839 | orchestrator | changed: 
[testbed-node-0] 2026-02-04 01:47:29.263850 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:47:29.263860 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:47:29.263872 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:47:29.263883 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:47:29.263894 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:47:29.263905 | orchestrator | changed: [testbed-manager] 2026-02-04 01:47:29.263914 | orchestrator | 2026-02-04 01:47:29.263921 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-02-04 01:47:29.263928 | orchestrator | Wednesday 04 February 2026 01:47:06 +0000 (0:00:35.381) 0:04:38.943 **** 2026-02-04 01:47:29.263935 | orchestrator | changed: [testbed-manager] 2026-02-04 01:47:29.263942 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:47:29.263949 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:47:29.263955 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:47:29.263962 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:47:29.263969 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:47:29.263975 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:47:29.263982 | orchestrator | 2026-02-04 01:47:29.263989 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-02-04 01:47:29.264004 | orchestrator | Wednesday 04 February 2026 01:47:13 +0000 (0:00:07.617) 0:04:46.561 **** 2026-02-04 01:47:29.264011 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:47:29.264018 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:47:29.264024 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:47:29.264031 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:47:29.264038 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:47:29.264044 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:47:29.264051 | orchestrator | changed: [testbed-manager] 
2026-02-04 01:47:29.264058 | orchestrator | 2026-02-04 01:47:29.264065 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-02-04 01:47:29.264079 | orchestrator | Wednesday 04 February 2026 01:47:21 +0000 (0:00:07.591) 0:04:54.152 **** 2026-02-04 01:47:29.264086 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:47:29.264092 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:47:29.264099 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:47:29.264106 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:47:29.264112 | orchestrator | ok: [testbed-manager] 2026-02-04 01:47:29.264119 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:47:29.264126 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:47:29.264132 | orchestrator | 2026-02-04 01:47:29.264139 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-02-04 01:47:29.264146 | orchestrator | Wednesday 04 February 2026 01:47:22 +0000 (0:00:01.642) 0:04:55.795 **** 2026-02-04 01:47:29.264153 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:47:29.264160 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:47:29.264166 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:47:29.264173 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:47:29.264180 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:47:29.264186 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:47:29.264193 | orchestrator | changed: [testbed-manager] 2026-02-04 01:47:29.264200 | orchestrator | 2026-02-04 01:47:29.264217 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-02-04 01:47:40.949629 | orchestrator | Wednesday 04 February 2026 01:47:29 +0000 (0:00:06.250) 0:05:02.045 **** 2026-02-04 01:47:40.949721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, 
testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:47:40.949733 | orchestrator | 2026-02-04 01:47:40.949740 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-02-04 01:47:40.949747 | orchestrator | Wednesday 04 February 2026 01:47:29 +0000 (0:00:00.499) 0:05:02.544 **** 2026-02-04 01:47:40.949753 | orchestrator | changed: [testbed-manager] 2026-02-04 01:47:40.949761 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:47:40.949767 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:47:40.949774 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:47:40.949780 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:47:40.949786 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:47:40.949793 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:47:40.949799 | orchestrator | 2026-02-04 01:47:40.949805 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-02-04 01:47:40.949812 | orchestrator | Wednesday 04 February 2026 01:47:30 +0000 (0:00:00.758) 0:05:03.302 **** 2026-02-04 01:47:40.949819 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:47:40.949827 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:47:40.949834 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:47:40.949840 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:47:40.949846 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:47:40.949854 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:47:40.949861 | orchestrator | ok: [testbed-manager] 2026-02-04 01:47:40.949867 | orchestrator | 2026-02-04 01:47:40.949875 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-02-04 01:47:40.949882 | orchestrator | Wednesday 04 February 2026 01:47:32 +0000 (0:00:01.664) 0:05:04.967 **** 2026-02-04 01:47:40.949889 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:47:40.949896 | orchestrator | 
changed: [testbed-node-5] 2026-02-04 01:47:40.949903 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:47:40.949910 | orchestrator | changed: [testbed-manager] 2026-02-04 01:47:40.949917 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:47:40.949925 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:47:40.949932 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:47:40.949939 | orchestrator | 2026-02-04 01:47:40.949946 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-02-04 01:47:40.949953 | orchestrator | Wednesday 04 February 2026 01:47:33 +0000 (0:00:00.857) 0:05:05.825 **** 2026-02-04 01:47:40.949981 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:47:40.949988 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:47:40.949995 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:47:40.950001 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:47:40.950008 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:47:40.950066 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:47:40.950073 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:47:40.950081 | orchestrator | 2026-02-04 01:47:40.950088 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-02-04 01:47:40.950096 | orchestrator | Wednesday 04 February 2026 01:47:33 +0000 (0:00:00.403) 0:05:06.229 **** 2026-02-04 01:47:40.950103 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:47:40.950109 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:47:40.950115 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:47:40.950122 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:47:40.950129 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:47:40.950137 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:47:40.950143 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:47:40.950150 | orchestrator | 2026-02-04 
01:47:40.950157 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-02-04 01:47:40.950163 | orchestrator | Wednesday 04 February 2026 01:47:33 +0000 (0:00:00.439) 0:05:06.669 **** 2026-02-04 01:47:40.950169 | orchestrator | ok: [testbed-manager] 2026-02-04 01:47:40.950175 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:47:40.950181 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:47:40.950187 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:47:40.950194 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:47:40.950201 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:47:40.950208 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:47:40.950215 | orchestrator | 2026-02-04 01:47:40.950221 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-02-04 01:47:40.950240 | orchestrator | Wednesday 04 February 2026 01:47:34 +0000 (0:00:00.322) 0:05:06.991 **** 2026-02-04 01:47:40.950247 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:47:40.950254 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:47:40.950261 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:47:40.950268 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:47:40.950275 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:47:40.950282 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:47:40.950289 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:47:40.950295 | orchestrator | 2026-02-04 01:47:40.950303 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-02-04 01:47:40.950311 | orchestrator | Wednesday 04 February 2026 01:47:34 +0000 (0:00:00.333) 0:05:07.324 **** 2026-02-04 01:47:40.950319 | orchestrator | ok: [testbed-manager] 2026-02-04 01:47:40.950325 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:47:40.950332 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:47:40.950338 | orchestrator | 
ok: [testbed-node-5] 2026-02-04 01:47:40.950345 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:47:40.950351 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:47:40.950358 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:47:40.950364 | orchestrator | 2026-02-04 01:47:40.950372 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-02-04 01:47:40.950379 | orchestrator | Wednesday 04 February 2026 01:47:34 +0000 (0:00:00.330) 0:05:07.655 **** 2026-02-04 01:47:40.950385 | orchestrator | ok: [testbed-manager] =>  2026-02-04 01:47:40.950411 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 01:47:40.950419 | orchestrator | ok: [testbed-node-3] =>  2026-02-04 01:47:40.950425 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 01:47:40.950432 | orchestrator | ok: [testbed-node-4] =>  2026-02-04 01:47:40.950439 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 01:47:40.950446 | orchestrator | ok: [testbed-node-5] =>  2026-02-04 01:47:40.950454 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 01:47:40.950489 | orchestrator | ok: [testbed-node-0] =>  2026-02-04 01:47:40.950498 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 01:47:40.950505 | orchestrator | ok: [testbed-node-1] =>  2026-02-04 01:47:40.950513 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 01:47:40.950520 | orchestrator | ok: [testbed-node-2] =>  2026-02-04 01:47:40.950527 | orchestrator |  docker_version: 5:27.5.1 2026-02-04 01:47:40.950534 | orchestrator | 2026-02-04 01:47:40.950541 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-02-04 01:47:40.950549 | orchestrator | Wednesday 04 February 2026 01:47:35 +0000 (0:00:00.309) 0:05:07.965 **** 2026-02-04 01:47:40.950556 | orchestrator | ok: [testbed-manager] =>  2026-02-04 01:47:40.950563 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 01:47:40.950570 | orchestrator | ok: [testbed-node-3] =>  2026-02-04 
01:47:40.950576 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 01:47:40.950583 | orchestrator | ok: [testbed-node-4] =>  2026-02-04 01:47:40.950590 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 01:47:40.950596 | orchestrator | ok: [testbed-node-5] =>  2026-02-04 01:47:40.950603 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 01:47:40.950608 | orchestrator | ok: [testbed-node-0] =>  2026-02-04 01:47:40.950615 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 01:47:40.950621 | orchestrator | ok: [testbed-node-1] =>  2026-02-04 01:47:40.950627 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 01:47:40.950634 | orchestrator | ok: [testbed-node-2] =>  2026-02-04 01:47:40.950640 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-04 01:47:40.950647 | orchestrator | 2026-02-04 01:47:40.950654 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-02-04 01:47:40.950660 | orchestrator | Wednesday 04 February 2026 01:47:35 +0000 (0:00:00.347) 0:05:08.312 **** 2026-02-04 01:47:40.950666 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:47:40.950672 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:47:40.950678 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:47:40.950684 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:47:40.950691 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:47:40.950697 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:47:40.950703 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:47:40.950709 | orchestrator | 2026-02-04 01:47:40.950716 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-02-04 01:47:40.950723 | orchestrator | Wednesday 04 February 2026 01:47:35 +0000 (0:00:00.283) 0:05:08.596 **** 2026-02-04 01:47:40.950729 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:47:40.950736 | orchestrator | skipping: [testbed-node-3] 
2026-02-04 01:47:40.950742 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:47:40.950748 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:47:40.950754 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:47:40.950759 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:47:40.950765 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:47:40.950771 | orchestrator | 2026-02-04 01:47:40.950777 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-02-04 01:47:40.950783 | orchestrator | Wednesday 04 February 2026 01:47:36 +0000 (0:00:00.300) 0:05:08.897 **** 2026-02-04 01:47:40.950792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 01:47:40.950801 | orchestrator | 2026-02-04 01:47:40.950807 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-02-04 01:47:40.950814 | orchestrator | Wednesday 04 February 2026 01:47:36 +0000 (0:00:00.448) 0:05:09.345 **** 2026-02-04 01:47:40.950820 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:47:40.950826 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:47:40.950840 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:47:40.950846 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:47:40.950852 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:47:40.950865 | orchestrator | ok: [testbed-manager] 2026-02-04 01:47:40.950871 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:47:40.950877 | orchestrator | 2026-02-04 01:47:40.950883 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-02-04 01:47:40.950891 | orchestrator | Wednesday 04 February 2026 01:47:37 +0000 (0:00:00.947) 0:05:10.292 **** 2026-02-04 01:47:40.950895 | orchestrator 
| ok: [testbed-node-2] 2026-02-04 01:47:40.950898 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:47:40.950902 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:47:40.950906 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:47:40.950909 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:47:40.950919 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:47:40.950923 | orchestrator | ok: [testbed-manager] 2026-02-04 01:47:40.950927 | orchestrator | 2026-02-04 01:47:40.950931 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-02-04 01:47:40.950936 | orchestrator | Wednesday 04 February 2026 01:47:40 +0000 (0:00:03.007) 0:05:13.300 **** 2026-02-04 01:47:40.950940 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-02-04 01:47:40.950944 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-02-04 01:47:40.950948 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-02-04 01:47:40.950952 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:47:40.950956 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-02-04 01:47:40.950960 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-02-04 01:47:40.950963 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-02-04 01:47:40.950967 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:47:40.950971 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-02-04 01:47:40.950974 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-02-04 01:47:40.950978 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-02-04 01:47:40.950982 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-02-04 01:47:40.950986 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-02-04 01:47:40.950989 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  
2026-02-04 01:47:40.950993 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:47:40.950997 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-02-04 01:47:40.951008 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-02-04 01:48:39.039585 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-02-04 01:48:39.039685 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:48:39.039697 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-02-04 01:48:39.039705 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-02-04 01:48:39.039712 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-02-04 01:48:39.039718 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:48:39.039725 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:48:39.039731 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-02-04 01:48:39.039737 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-02-04 01:48:39.039743 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-02-04 01:48:39.039750 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:48:39.039757 | orchestrator | 2026-02-04 01:48:39.039766 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-02-04 01:48:39.039775 | orchestrator | Wednesday 04 February 2026 01:47:41 +0000 (0:00:00.666) 0:05:13.966 **** 2026-02-04 01:48:39.039781 | orchestrator | ok: [testbed-manager] 2026-02-04 01:48:39.039788 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:48:39.039794 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:48:39.039801 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:48:39.039808 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:48:39.039815 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:48:39.039846 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:48:39.039851 | 
orchestrator | 2026-02-04 01:48:39.039855 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-02-04 01:48:39.039859 | orchestrator | Wednesday 04 February 2026 01:47:47 +0000 (0:00:06.485) 0:05:20.452 **** 2026-02-04 01:48:39.039863 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:48:39.039867 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:48:39.039871 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:48:39.039875 | orchestrator | ok: [testbed-manager] 2026-02-04 01:48:39.039879 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:48:39.039883 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:48:39.039887 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:48:39.039891 | orchestrator | 2026-02-04 01:48:39.039894 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-02-04 01:48:39.039898 | orchestrator | Wednesday 04 February 2026 01:47:48 +0000 (0:00:01.084) 0:05:21.536 **** 2026-02-04 01:48:39.039902 | orchestrator | ok: [testbed-manager] 2026-02-04 01:48:39.039906 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:48:39.039910 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:48:39.039914 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:48:39.039917 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:48:39.039921 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:48:39.039925 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:48:39.039929 | orchestrator | 2026-02-04 01:48:39.039933 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-02-04 01:48:39.039937 | orchestrator | Wednesday 04 February 2026 01:47:56 +0000 (0:00:07.982) 0:05:29.518 **** 2026-02-04 01:48:39.039941 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:48:39.039945 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:48:39.039948 | orchestrator | changed: 
[testbed-node-5] 2026-02-04 01:48:39.039952 | orchestrator | changed: [testbed-manager] 2026-02-04 01:48:39.039956 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:48:39.039960 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:48:39.039964 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:48:39.039967 | orchestrator | 2026-02-04 01:48:39.039971 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-02-04 01:48:39.039975 | orchestrator | Wednesday 04 February 2026 01:47:59 +0000 (0:00:03.228) 0:05:32.747 **** 2026-02-04 01:48:39.039979 | orchestrator | ok: [testbed-manager] 2026-02-04 01:48:39.039983 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:48:39.039987 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:48:39.039990 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:48:39.039994 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:48:39.039998 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:48:39.040002 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:48:39.040006 | orchestrator | 2026-02-04 01:48:39.040010 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-02-04 01:48:39.040013 | orchestrator | Wednesday 04 February 2026 01:48:01 +0000 (0:00:01.306) 0:05:34.053 **** 2026-02-04 01:48:39.040017 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:48:39.040021 | orchestrator | ok: [testbed-manager] 2026-02-04 01:48:39.040025 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:48:39.040029 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:48:39.040033 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:48:39.040037 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:48:39.040041 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:48:39.040045 | orchestrator | 2026-02-04 01:48:39.040049 | orchestrator | TASK [osism.services.docker : Unlock containerd package] 
*********************** 2026-02-04 01:48:39.040053 | orchestrator | Wednesday 04 February 2026 01:48:02 +0000 (0:00:01.657) 0:05:35.711 **** 2026-02-04 01:48:39.040057 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:48:39.040061 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:48:39.040065 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:48:39.040068 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:48:39.040077 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:48:39.040081 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:48:39.040084 | orchestrator | changed: [testbed-manager] 2026-02-04 01:48:39.040088 | orchestrator | 2026-02-04 01:48:39.040092 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-02-04 01:48:39.040096 | orchestrator | Wednesday 04 February 2026 01:48:03 +0000 (0:00:00.692) 0:05:36.403 **** 2026-02-04 01:48:39.040100 | orchestrator | ok: [testbed-manager] 2026-02-04 01:48:39.040104 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:48:39.040108 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:48:39.040113 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:48:39.040119 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:48:39.040125 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:48:39.040131 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:48:39.040137 | orchestrator | 2026-02-04 01:48:39.040144 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-02-04 01:48:39.040167 | orchestrator | Wednesday 04 February 2026 01:48:12 +0000 (0:00:08.874) 0:05:45.278 **** 2026-02-04 01:48:39.040174 | orchestrator | changed: [testbed-manager] 2026-02-04 01:48:39.040181 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:48:39.040188 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:48:39.040194 | orchestrator | changed: [testbed-node-5] 2026-02-04 
01:48:39.040201 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:48:39.040207 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:48:39.040213 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:48:39.040220 | orchestrator | 2026-02-04 01:48:39.040228 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-02-04 01:48:39.040234 | orchestrator | Wednesday 04 February 2026 01:48:13 +0000 (0:00:00.908) 0:05:46.187 **** 2026-02-04 01:48:39.040240 | orchestrator | ok: [testbed-manager] 2026-02-04 01:48:39.040248 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:48:39.040254 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:48:39.040261 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:48:39.040267 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:48:39.040274 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:48:39.040280 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:48:39.040287 | orchestrator | 2026-02-04 01:48:39.040293 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-02-04 01:48:39.040300 | orchestrator | Wednesday 04 February 2026 01:48:21 +0000 (0:00:08.336) 0:05:54.523 **** 2026-02-04 01:48:39.040307 | orchestrator | ok: [testbed-manager] 2026-02-04 01:48:39.040313 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:48:39.040320 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:48:39.040327 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:48:39.040334 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:48:39.040341 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:48:39.040348 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:48:39.040355 | orchestrator | 2026-02-04 01:48:39.040362 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-02-04 01:48:39.040369 | orchestrator | Wednesday 04 February 2026 01:48:32 +0000 
(0:00:11.011) 0:06:05.535 **** 2026-02-04 01:48:39.040375 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-02-04 01:48:39.040383 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-02-04 01:48:39.040389 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-02-04 01:48:39.040397 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-02-04 01:48:39.040405 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-02-04 01:48:39.040411 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-02-04 01:48:39.040417 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-02-04 01:48:39.040423 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-02-04 01:48:39.040430 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-02-04 01:48:39.040443 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-02-04 01:48:39.040449 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-02-04 01:48:39.040569 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-02-04 01:48:39.040582 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-02-04 01:48:39.040590 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-02-04 01:48:39.040596 | orchestrator | 2026-02-04 01:48:39.040604 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-02-04 01:48:39.040611 | orchestrator | Wednesday 04 February 2026 01:48:33 +0000 (0:00:01.242) 0:06:06.777 **** 2026-02-04 01:48:39.040618 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:48:39.040625 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:48:39.040632 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:48:39.040639 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:48:39.040645 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:48:39.040653 | orchestrator | skipping: [testbed-node-1] 
2026-02-04 01:48:39.040659 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:48:39.040666 | orchestrator |
2026-02-04 01:48:39.040673 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-04 01:48:39.040679 | orchestrator | Wednesday 04 February 2026 01:48:34 +0000 (0:00:00.591) 0:06:07.368 ****
2026-02-04 01:48:39.040683 | orchestrator | ok: [testbed-manager]
2026-02-04 01:48:39.040688 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:48:39.040692 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:48:39.040696 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:48:39.040700 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:48:39.040706 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:48:39.040716 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:48:39.040722 | orchestrator |
2026-02-04 01:48:39.040728 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-04 01:48:39.040736 | orchestrator | Wednesday 04 February 2026 01:48:37 +0000 (0:00:03.415) 0:06:10.784 ****
2026-02-04 01:48:39.040743 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:48:39.040749 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:48:39.040756 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:48:39.040762 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:48:39.040769 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:48:39.040776 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:48:39.040782 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:48:39.040789 | orchestrator |
2026-02-04 01:48:39.040796 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-04 01:48:39.040803 | orchestrator | Wednesday 04 February 2026 01:48:38 +0000 (0:00:00.526) 0:06:11.310 ****
2026-02-04 01:48:39.040810 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-04 01:48:39.040817 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-04 01:48:39.040823 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:48:39.040830 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-04 01:48:39.040837 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-04 01:48:39.040844 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:48:39.040851 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-04 01:48:39.040857 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-04 01:48:39.040864 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:48:39.040882 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-04 01:48:58.625091 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-04 01:48:58.625176 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:48:58.625187 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-04 01:48:58.625194 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-04 01:48:58.625202 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:48:58.625233 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-04 01:48:58.625241 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-04 01:48:58.625247 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:48:58.625254 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-04 01:48:58.625261 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-04 01:48:58.625268 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:48:58.625275 | orchestrator |
2026-02-04 01:48:58.625284 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-02-04 01:48:58.625292 | orchestrator | Wednesday 04 February 2026 01:48:39 +0000 (0:00:00.789) 0:06:12.099 ****
2026-02-04 01:48:58.625298 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:48:58.625305 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:48:58.625312 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:48:58.625318 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:48:58.625324 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:48:58.625328 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:48:58.625332 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:48:58.625335 | orchestrator |
2026-02-04 01:48:58.625339 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-02-04 01:48:58.625344 | orchestrator | Wednesday 04 February 2026 01:48:39 +0000 (0:00:00.547) 0:06:12.646 ****
2026-02-04 01:48:58.625348 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:48:58.625352 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:48:58.625355 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:48:58.625359 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:48:58.625363 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:48:58.625367 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:48:58.625370 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:48:58.625374 | orchestrator |
2026-02-04 01:48:58.625378 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-02-04 01:48:58.625382 | orchestrator | Wednesday 04 February 2026 01:48:40 +0000 (0:00:00.530) 0:06:13.177 ****
2026-02-04 01:48:58.625386 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:48:58.625389 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:48:58.625393 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:48:58.625397 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:48:58.625401 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:48:58.625404 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:48:58.625408 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:48:58.625412 | orchestrator |
2026-02-04 01:48:58.625416 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-02-04 01:48:58.625422 | orchestrator | Wednesday 04 February 2026 01:48:40 +0000 (0:00:00.599) 0:06:13.776 ****
2026-02-04 01:48:58.625428 | orchestrator | ok: [testbed-manager]
2026-02-04 01:48:58.625434 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:48:58.625440 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:48:58.625446 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:48:58.625451 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:48:58.625457 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:48:58.625463 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:48:58.625468 | orchestrator |
2026-02-04 01:48:58.625474 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-02-04 01:48:58.625479 | orchestrator | Wednesday 04 February 2026 01:48:42 +0000 (0:00:01.701) 0:06:15.478 ****
2026-02-04 01:48:58.625486 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:48:58.625495 | orchestrator |
2026-02-04 01:48:58.625501 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-02-04 01:48:58.625508 | orchestrator | Wednesday 04 February 2026 01:48:43 +0000 (0:00:00.940) 0:06:16.419 ****
2026-02-04 01:48:58.625579 | orchestrator | ok: [testbed-manager]
2026-02-04 01:48:58.625594 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:48:58.625600 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:48:58.625607 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:48:58.625613 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:48:58.625617 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:48:58.625620 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:48:58.625624 | orchestrator |
2026-02-04 01:48:58.625628 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-02-04 01:48:58.625632 | orchestrator | Wednesday 04 February 2026 01:48:44 +0000 (0:00:00.869) 0:06:17.288 ****
2026-02-04 01:48:58.625636 | orchestrator | ok: [testbed-manager]
2026-02-04 01:48:58.625640 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:48:58.625644 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:48:58.625648 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:48:58.625652 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:48:58.625655 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:48:58.625660 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:48:58.625665 | orchestrator |
2026-02-04 01:48:58.625669 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-02-04 01:48:58.625674 | orchestrator | Wednesday 04 February 2026 01:48:45 +0000 (0:00:00.882) 0:06:18.170 ****
2026-02-04 01:48:58.625678 | orchestrator | ok: [testbed-manager]
2026-02-04 01:48:58.625683 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:48:58.625687 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:48:58.625692 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:48:58.625697 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:48:58.625701 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:48:58.625705 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:48:58.625710 | orchestrator |
2026-02-04 01:48:58.625714 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-02-04 01:48:58.625737 | orchestrator | Wednesday 04 February 2026 01:48:47 +0000 (0:00:01.661) 0:06:19.832 ****
2026-02-04 01:48:58.625745 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:48:58.625752 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:48:58.625758 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:48:58.625765 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:48:58.625771 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:48:58.625778 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:48:58.625784 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:48:58.625789 | orchestrator |
2026-02-04 01:48:58.625793 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-02-04 01:48:58.625797 | orchestrator | Wednesday 04 February 2026 01:48:48 +0000 (0:00:01.371) 0:06:21.204 ****
2026-02-04 01:48:58.625800 | orchestrator | ok: [testbed-manager]
2026-02-04 01:48:58.625804 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:48:58.625808 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:48:58.625812 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:48:58.625816 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:48:58.625820 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:48:58.625823 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:48:58.625827 | orchestrator |
2026-02-04 01:48:58.625831 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2026-02-04 01:48:58.625835 | orchestrator | Wednesday 04 February 2026 01:48:49 +0000 (0:00:01.294) 0:06:22.498 ****
2026-02-04 01:48:58.625839 | orchestrator | changed: [testbed-manager]
2026-02-04 01:48:58.625842 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:48:58.625846 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:48:58.625850 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:48:58.625854 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:48:58.625857 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:48:58.625861 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:48:58.625865 | orchestrator |
2026-02-04 01:48:58.625874 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2026-02-04 01:48:58.625878 | orchestrator | Wednesday 04 February 2026 01:48:51 +0000 (0:00:01.377) 0:06:23.876 ****
2026-02-04 01:48:58.625882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:48:58.625886 | orchestrator |
2026-02-04 01:48:58.625890 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2026-02-04 01:48:58.625894 | orchestrator | Wednesday 04 February 2026 01:48:52 +0000 (0:00:01.135) 0:06:25.012 ****
2026-02-04 01:48:58.625898 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:48:58.625902 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:48:58.625906 | orchestrator | ok: [testbed-manager]
2026-02-04 01:48:58.625909 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:48:58.625913 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:48:58.625917 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:48:58.625921 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:48:58.625925 | orchestrator |
2026-02-04 01:48:58.625928 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2026-02-04 01:48:58.625932 | orchestrator | Wednesday 04 February 2026 01:48:53 +0000 (0:00:01.346) 0:06:26.358 ****
2026-02-04 01:48:58.625936 | orchestrator | ok: [testbed-manager]
2026-02-04 01:48:58.625940 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:48:58.625943 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:48:58.625947 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:48:58.625951 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:48:58.625955 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:48:58.625958 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:48:58.625962 | orchestrator |
2026-02-04 01:48:58.625966 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2026-02-04 01:48:58.625970 | orchestrator | Wednesday 04 February 2026 01:48:54 +0000 (0:00:01.132) 0:06:27.490 ****
2026-02-04 01:48:58.625974 | orchestrator | ok: [testbed-manager]
2026-02-04 01:48:58.625978 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:48:58.625981 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:48:58.625985 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:48:58.625989 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:48:58.625993 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:48:58.625996 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:48:58.626000 | orchestrator |
2026-02-04 01:48:58.626004 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2026-02-04 01:48:58.626008 | orchestrator | Wednesday 04 February 2026 01:48:55 +0000 (0:00:01.154) 0:06:28.645 ****
2026-02-04 01:48:58.626012 | orchestrator | ok: [testbed-manager]
2026-02-04 01:48:58.626064 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:48:58.626069 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:48:58.626073 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:48:58.626076 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:48:58.626080 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:48:58.626084 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:48:58.626088 | orchestrator |
2026-02-04 01:48:58.626092 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2026-02-04 01:48:58.626096 | orchestrator | Wednesday 04 February 2026 01:48:57 +0000 (0:00:01.419) 0:06:30.064 ****
2026-02-04 01:48:58.626100 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:48:58.626104 | orchestrator |
2026-02-04 01:48:58.626107 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 01:48:58.626111 | orchestrator | Wednesday 04 February 2026 01:48:58 +0000 (0:00:01.009) 0:06:31.073 ****
2026-02-04 01:48:58.626115 | orchestrator |
2026-02-04 01:48:58.626119 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 01:48:58.626126 | orchestrator | Wednesday 04 February 2026 01:48:58 +0000 (0:00:00.052) 0:06:31.126 ****
2026-02-04 01:48:58.626130 | orchestrator |
2026-02-04 01:48:58.626134 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 01:48:58.626138 | orchestrator | Wednesday 04 February 2026 01:48:58 +0000 (0:00:00.039) 0:06:31.166 ****
2026-02-04 01:48:58.626142 | orchestrator |
2026-02-04 01:48:58.626146 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 01:48:58.626154 | orchestrator | Wednesday 04 February 2026 01:48:58 +0000 (0:00:00.050) 0:06:31.217 ****
2026-02-04 01:49:24.309347 | orchestrator |
2026-02-04 01:49:24.309455 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 01:49:24.309477 | orchestrator | Wednesday 04 February 2026 01:48:58 +0000 (0:00:00.042) 0:06:31.259 ****
2026-02-04 01:49:24.309492 | orchestrator |
2026-02-04 01:49:24.309507 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 01:49:24.309521 | orchestrator | Wednesday 04 February 2026 01:48:58 +0000 (0:00:00.046) 0:06:31.305 ****
2026-02-04 01:49:24.309536 | orchestrator |
2026-02-04 01:49:24.309549 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2026-02-04 01:49:24.309589 | orchestrator | Wednesday 04 February 2026 01:48:58 +0000 (0:00:00.052) 0:06:31.358 ****
2026-02-04 01:49:24.309603 | orchestrator |
2026-02-04 01:49:24.309617 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-04 01:49:24.309632 | orchestrator | Wednesday 04 February 2026 01:48:58 +0000 (0:00:00.048) 0:06:31.406 ****
2026-02-04 01:49:24.309648 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:49:24.309664 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:49:24.309679 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:49:24.309694 | orchestrator |
2026-02-04 01:49:24.309709 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2026-02-04 01:49:24.309725 | orchestrator | Wednesday 04 February 2026 01:48:59 +0000 (0:00:01.105) 0:06:32.512 ****
2026-02-04 01:49:24.309739 | orchestrator | changed: [testbed-manager]
2026-02-04 01:49:24.309754 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:49:24.309763 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:49:24.309772 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:49:24.309782 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:49:24.309790 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:49:24.309799 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:49:24.309808 | orchestrator |
2026-02-04 01:49:24.309817 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] ***********
2026-02-04 01:49:24.309826 | orchestrator | Wednesday 04 February 2026 01:49:01 +0000 (0:00:01.526) 0:06:34.039 ****
2026-02-04 01:49:24.309835 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:49:24.309843 | orchestrator | changed: [testbed-manager]
2026-02-04 01:49:24.309852 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:49:24.309861 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:49:24.309870 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:49:24.309878 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:49:24.309887 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:49:24.309895 | orchestrator |
2026-02-04 01:49:24.309904 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2026-02-04 01:49:24.309913 | orchestrator | Wednesday 04 February 2026 01:49:02 +0000 (0:00:01.186) 0:06:35.225 ****
2026-02-04 01:49:24.309922 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:49:24.309930 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:49:24.309939 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:49:24.309947 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:49:24.309956 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:49:24.309965 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:49:24.309974 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:49:24.309982 | orchestrator |
2026-02-04 01:49:24.309991 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2026-02-04 01:49:24.310000 | orchestrator | Wednesday 04 February 2026 01:49:04 +0000 (0:00:02.402) 0:06:37.627 ****
2026-02-04 01:49:24.310089 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:49:24.310100 | orchestrator |
2026-02-04 01:49:24.310109 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2026-02-04 01:49:24.310118 | orchestrator | Wednesday 04 February 2026 01:49:04 +0000 (0:00:00.117) 0:06:37.745 ****
2026-02-04 01:49:24.310127 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:49:24.310136 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:24.310144 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:49:24.310153 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:49:24.310162 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:49:24.310170 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:49:24.310179 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:49:24.310188 | orchestrator |
2026-02-04 01:49:24.310197 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2026-02-04 01:49:24.310206 | orchestrator | Wednesday 04 February 2026 01:49:05 +0000 (0:00:01.042) 0:06:38.787 ****
2026-02-04 01:49:24.310215 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:49:24.310237 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:49:24.310246 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:49:24.310255 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:49:24.310266 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:49:24.310280 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:49:24.310301 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:49:24.310318 | orchestrator |
2026-02-04 01:49:24.310332 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2026-02-04 01:49:24.310346 | orchestrator | Wednesday 04 February 2026 01:49:06 +0000 (0:00:00.623) 0:06:39.410 ****
2026-02-04 01:49:24.310363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:49:24.310380 | orchestrator |
2026-02-04 01:49:24.310395 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2026-02-04 01:49:24.310410 | orchestrator | Wednesday 04 February 2026 01:49:07 +0000 (0:00:01.188) 0:06:40.598 ****
2026-02-04 01:49:24.310419 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:49:24.310428 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:24.310437 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:49:24.310445 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:49:24.310454 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:49:24.310463 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:49:24.310472 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:49:24.310481 | orchestrator |
2026-02-04 01:49:24.310490 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2026-02-04 01:49:24.310498 | orchestrator | Wednesday 04 February 2026 01:49:08 +0000 (0:00:00.885) 0:06:41.484 ****
2026-02-04 01:49:24.310507 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2026-02-04 01:49:24.310536 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2026-02-04 01:49:24.310546 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2026-02-04 01:49:24.310704 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2026-02-04 01:49:24.310733 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2026-02-04 01:49:24.310742 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2026-02-04 01:49:24.310751 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2026-02-04 01:49:24.310760 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2026-02-04 01:49:24.310769 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2026-02-04 01:49:24.310777 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2026-02-04 01:49:24.310786 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2026-02-04 01:49:24.310795 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2026-02-04 01:49:24.310816 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2026-02-04 01:49:24.310825 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2026-02-04 01:49:24.310847 | orchestrator |
2026-02-04 01:49:24.310865 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2026-02-04 01:49:24.310874 | orchestrator | Wednesday 04 February 2026 01:49:10 +0000 (0:00:02.303) 0:06:43.788 ****
2026-02-04 01:49:24.310883 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:49:24.310892 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:49:24.310900 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:49:24.310909 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:49:24.310918 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:49:24.310926 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:49:24.310935 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:49:24.310943 | orchestrator |
2026-02-04 01:49:24.310952 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2026-02-04 01:49:24.310961 | orchestrator | Wednesday 04 February 2026 01:49:11 +0000 (0:00:00.790) 0:06:44.579 ****
2026-02-04 01:49:24.310972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:49:24.310983 | orchestrator |
2026-02-04 01:49:24.310992 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2026-02-04 01:49:24.311001 | orchestrator | Wednesday 04 February 2026 01:49:12 +0000 (0:00:00.896) 0:06:45.475 ****
2026-02-04 01:49:24.311009 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:24.311020 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:49:24.311034 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:49:24.311048 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:49:24.311063 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:49:24.311078 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:49:24.311093 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:49:24.311108 | orchestrator |
2026-02-04 01:49:24.311123 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2026-02-04 01:49:24.311139 | orchestrator | Wednesday 04 February 2026 01:49:13 +0000 (0:00:00.847) 0:06:46.322 ****
2026-02-04 01:49:24.311154 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:24.311168 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:49:24.311182 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:49:24.311196 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:49:24.311210 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:49:24.311223 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:49:24.311238 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:49:24.311253 | orchestrator |
2026-02-04 01:49:24.311268 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2026-02-04 01:49:24.311282 | orchestrator | Wednesday 04 February 2026 01:49:14 +0000 (0:00:01.078) 0:06:47.401 ****
2026-02-04 01:49:24.311297 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:49:24.311312 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:49:24.311327 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:49:24.311341 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:49:24.311356 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:49:24.311371 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:49:24.311386 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:49:24.311401 | orchestrator |
2026-02-04 01:49:24.311412 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2026-02-04 01:49:24.311422 | orchestrator | Wednesday 04 February 2026 01:49:15 +0000 (0:00:00.570) 0:06:47.972 ****
2026-02-04 01:49:24.311437 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:49:24.311451 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:49:24.311466 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:24.311481 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:49:24.311495 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:49:24.311517 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:49:24.311526 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:49:24.311535 | orchestrator |
2026-02-04 01:49:24.311544 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2026-02-04 01:49:24.311553 | orchestrator | Wednesday 04 February 2026 01:49:16 +0000 (0:00:01.410) 0:06:49.382 ****
2026-02-04 01:49:24.311597 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:49:24.311606 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:49:24.311616 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:49:24.311625 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:49:24.311633 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:49:24.311642 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:49:24.311651 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:49:24.311659 | orchestrator |
2026-02-04 01:49:24.311668 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2026-02-04 01:49:24.311678 | orchestrator | Wednesday 04 February 2026 01:49:17 +0000 (0:00:00.566) 0:06:49.949 ****
2026-02-04 01:49:24.311686 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:24.311695 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:49:24.311704 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:49:24.311712 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:49:24.311722 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:49:24.311737 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:49:24.311767 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:49:57.583109 | orchestrator |
2026-02-04 01:49:57.583207 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2026-02-04 01:49:57.583221 | orchestrator | Wednesday 04 February 2026 01:49:24 +0000 (0:00:07.147) 0:06:57.097 ****
2026-02-04 01:49:57.583231 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:57.583241 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:49:57.583251 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:49:57.583260 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:49:57.583269 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:49:57.583279 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:49:57.583292 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:49:57.583306 | orchestrator |
2026-02-04 01:49:57.583321 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2026-02-04 01:49:57.583336 | orchestrator | Wednesday 04 February 2026 01:49:25 +0000 (0:00:01.548) 0:06:58.645 ****
2026-02-04 01:49:57.583351 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:57.583367 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:49:57.583378 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:49:57.583387 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:49:57.583396 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:49:57.583405 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:49:57.583414 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:49:57.583422 | orchestrator |
2026-02-04 01:49:57.583431 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2026-02-04 01:49:57.583441 | orchestrator | Wednesday 04 February 2026 01:49:27 +0000 (0:00:01.709) 0:07:00.355 ****
2026-02-04 01:49:57.583450 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:57.583459 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:49:57.583468 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:49:57.583476 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:49:57.583485 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:49:57.583494 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:49:57.583503 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:49:57.583511 | orchestrator |
2026-02-04 01:49:57.583520 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-04 01:49:57.583529 | orchestrator | Wednesday 04 February 2026 01:49:29 +0000 (0:00:01.707) 0:07:02.063 ****
2026-02-04 01:49:57.583538 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:57.583547 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:49:57.583556 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:49:57.583588 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:49:57.583629 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:49:57.583645 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:49:57.583661 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:49:57.583677 | orchestrator |
2026-02-04 01:49:57.583693 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-04 01:49:57.583709 | orchestrator | Wednesday 04 February 2026 01:49:30 +0000 (0:00:00.870) 0:07:02.933 ****
2026-02-04 01:49:57.583720 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:49:57.583731 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:49:57.583741 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:49:57.583752 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:49:57.583761 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:49:57.583771 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:49:57.583782 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:49:57.583792 | orchestrator |
2026-02-04 01:49:57.583802 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2026-02-04 01:49:57.583813 | orchestrator | Wednesday 04 February 2026 01:49:31 +0000 (0:00:01.126) 0:07:04.059 ****
2026-02-04 01:49:57.583823 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:49:57.583833 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:49:57.583843 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:49:57.583853 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:49:57.583863 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:49:57.583874 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:49:57.583884 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:49:57.583894 | orchestrator |
2026-02-04 01:49:57.583905 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2026-02-04 01:49:57.583915 | orchestrator | Wednesday 04 February 2026 01:49:31 +0000 (0:00:00.560) 0:07:04.620 ****
2026-02-04 01:49:57.583926 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:57.583952 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:49:57.583963 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:49:57.583974 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:49:57.583984 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:49:57.583994 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:49:57.584008 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:49:57.584018 | orchestrator |
2026-02-04 01:49:57.584027 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2026-02-04 01:49:57.584036 | orchestrator | Wednesday 04 February 2026 01:49:32 +0000 (0:00:00.647) 0:07:05.268 ****
2026-02-04 01:49:57.584044 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:57.584053 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:49:57.584062 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:49:57.584071 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:49:57.584080 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:49:57.584088 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:49:57.584097 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:49:57.584106 | orchestrator |
2026-02-04 01:49:57.584115 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-02-04 01:49:57.584124 | orchestrator | Wednesday 04 February 2026 01:49:33 +0000 (0:00:00.661) 0:07:05.929 ****
2026-02-04 01:49:57.584132 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:57.584141 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:49:57.584149 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:49:57.584158 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:49:57.584167 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:49:57.584175 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:49:57.584184 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:49:57.584192 | orchestrator |
2026-02-04 01:49:57.584201 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-02-04 01:49:57.584210 | orchestrator | Wednesday 04 February 2026 01:49:33 +0000 (0:00:00.834) 0:07:06.764 ****
2026-02-04 01:49:57.584219 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:57.584227 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:49:57.584247 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:49:57.584256 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:49:57.584265 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:49:57.584273 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:49:57.584282 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:49:57.584290 | orchestrator |
2026-02-04 01:49:57.584316 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-02-04 01:49:57.584325 | orchestrator | Wednesday 04 February 2026 01:49:39 +0000 (0:00:05.490) 0:07:12.254 ****
2026-02-04 01:49:57.584334 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:49:57.584343 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:49:57.584352 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:49:57.584361 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:49:57.584369 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:49:57.584378 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:49:57.584387 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:49:57.584396 | orchestrator |
2026-02-04 01:49:57.584404 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-02-04 01:49:57.584413 | orchestrator | Wednesday 04 February 2026 01:49:40 +0000 (0:00:00.562) 0:07:12.817 ****
2026-02-04 01:49:57.584424 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:49:57.584436 | orchestrator |
2026-02-04 01:49:57.584445 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-02-04 01:49:57.584454 | orchestrator | Wednesday 04 February 2026 01:49:41 +0000 (0:00:01.101) 0:07:13.919 ****
2026-02-04 01:49:57.584462 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:49:57.584471 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:57.584480 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:49:57.584489 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:49:57.584497 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:49:57.584506 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:49:57.584515 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:49:57.584523 | orchestrator |
2026-02-04 01:49:57.584532 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-02-04 01:49:57.584541 | orchestrator | Wednesday 04 February 2026 01:49:43 +0000 (0:00:01.886) 0:07:15.806 ****
2026-02-04 01:49:57.584550 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:57.584558 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:49:57.584567 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:49:57.584576 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:49:57.584589 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:49:57.584668 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:49:57.584679 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:49:57.584688 | orchestrator |
2026-02-04 01:49:57.584696 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-02-04 01:49:57.584705 | orchestrator | Wednesday 04 February 2026 01:49:44 +0000 (0:00:01.129) 0:07:16.935 ****
2026-02-04 01:49:57.584714 | orchestrator | ok: [testbed-manager]
2026-02-04 01:49:57.584723 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:49:57.584732 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:49:57.584740 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:49:57.584749 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:49:57.584758 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:49:57.584767 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:49:57.584775 | orchestrator |
2026-02-04 01:49:57.584784 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-02-04 01:49:57.584793 | orchestrator | Wednesday 04 February 2026 01:49:44 +0000 (0:00:00.844) 0:07:17.780 ****
2026-02-04 01:49:57.584802 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 01:49:57.584813 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 01:49:57.584830 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 01:49:57.584839 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 01:49:57.584853 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 01:49:57.584862 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 01:49:57.584871 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-02-04 01:49:57.584880 | orchestrator |
2026-02-04 01:49:57.584889 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-02-04 01:49:57.584898 | orchestrator | Wednesday 04 February 2026 01:49:46 +0000 (0:00:01.897) 0:07:19.677 ****
2026-02-04 01:49:57.584907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:49:57.584916 | orchestrator |
2026-02-04 01:49:57.584925 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-02-04 01:49:57.584934 | orchestrator | Wednesday 04 February 2026 01:49:47 +0000 (0:00:00.955) 0:07:20.633 ****
2026-02-04 01:49:57.584943 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:49:57.584952 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:49:57.584961 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:49:57.584969 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:49:57.584978 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:49:57.584987 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:49:57.584996 | orchestrator | changed: [testbed-manager]
2026-02-04 01:49:57.585004 | orchestrator |
2026-02-04 01:49:57.585020 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-02-04 01:50:29.793352 | orchestrator | Wednesday 04 February 2026 01:49:57 +0000 (0:00:09.736) 0:07:30.369 ****
2026-02-04 01:50:29.793463 | orchestrator | ok: [testbed-manager]
2026-02-04 01:50:29.793480 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:50:29.793492 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:50:29.793503 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:50:29.793515 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:50:29.793525 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:50:29.793536 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:50:29.793547 | orchestrator |
2026-02-04 01:50:29.793559 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-02-04 01:50:29.793571 | orchestrator | Wednesday 04 February 2026 01:49:59 +0000 (0:00:02.003) 0:07:32.373 ****
2026-02-04 01:50:29.793582 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:50:29.793593 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:50:29.793604 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:50:29.793615 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:50:29.793627 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:50:29.793667 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:50:29.793687 | orchestrator |
2026-02-04 01:50:29.793707 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-02-04 01:50:29.793735 | orchestrator | Wednesday 04 February 2026 01:50:00 +0000 (0:00:01.272) 0:07:33.646 ****
2026-02-04 01:50:29.793755 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:50:29.793775 | orchestrator | changed: [testbed-manager]
2026-02-04 01:50:29.793793 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:50:29.793810 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:50:29.793826 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:50:29.793891 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:50:29.793912 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:50:29.793931 | orchestrator |
2026-02-04 01:50:29.793952 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-02-04 01:50:29.793973 | orchestrator |
2026-02-04 01:50:29.793994 | orchestrator | TASK [Include hardening role] **************************************************
2026-02-04 01:50:29.794013 | orchestrator | Wednesday 04 February 2026 01:50:02 +0000 (0:00:01.247) 0:07:34.893 ****
2026-02-04 01:50:29.794071 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:50:29.794085 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:50:29.794098 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:50:29.794111 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:50:29.794123 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:50:29.794136 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:50:29.794149 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:50:29.794162 | orchestrator |
2026-02-04 01:50:29.794175 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-02-04 01:50:29.794188 | orchestrator |
2026-02-04 01:50:29.794201 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-02-04 01:50:29.794215 | orchestrator | Wednesday 04 February 2026 01:50:02 +0000 (0:00:00.826) 0:07:35.719 ****
2026-02-04 01:50:29.794228 | orchestrator | changed: [testbed-manager]
2026-02-04 01:50:29.794241 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:50:29.794254 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:50:29.794265 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:50:29.794276 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:50:29.794287 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:50:29.794297 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:50:29.794308 | orchestrator |
2026-02-04 01:50:29.794319 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-02-04 01:50:29.794331 | orchestrator | Wednesday 04 February 2026 01:50:04 +0000 (0:00:01.449) 0:07:37.169 ****
2026-02-04 01:50:29.794342 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:50:29.794352 | orchestrator | ok: [testbed-manager]
2026-02-04 01:50:29.794363 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:50:29.794374 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:50:29.794385 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:50:29.794396 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:50:29.794407 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:50:29.794417 | orchestrator |
2026-02-04 01:50:29.794428 | orchestrator | TASK [Include auditd role] *****************************************************
2026-02-04 01:50:29.794439 | orchestrator | Wednesday 04 February 2026 01:50:05 +0000 (0:00:01.550) 0:07:38.720 ****
2026-02-04 01:50:29.794451 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:50:29.794461 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:50:29.794472 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:50:29.794483 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:50:29.794494 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:50:29.794520 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:50:29.794532 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:50:29.794543 | orchestrator |
2026-02-04 01:50:29.794554 | orchestrator | TASK [Include smartd role] *****************************************************
2026-02-04 01:50:29.794568 | orchestrator | Wednesday 04 February 2026 01:50:06 +0000 (0:00:00.597) 0:07:39.318 ****
2026-02-04 01:50:29.794589 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:50:29.794605 | orchestrator |
2026-02-04 01:50:29.794616 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-02-04 01:50:29.794627 | orchestrator | Wednesday 04 February 2026 01:50:07 +0000 (0:00:01.171) 0:07:40.490 ****
2026-02-04 01:50:29.794703 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:50:29.794732 | orchestrator |
2026-02-04 01:50:29.794744 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-02-04 01:50:29.794755 | orchestrator | Wednesday 04 February 2026 01:50:08 +0000 (0:00:00.892) 0:07:41.383 ****
2026-02-04 01:50:29.794767 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:50:29.794777 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:50:29.794788 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:50:29.794799 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:50:29.794810 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:50:29.794821 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:50:29.794832 | orchestrator | changed: [testbed-manager]
2026-02-04 01:50:29.794843 | orchestrator |
2026-02-04 01:50:29.794880 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-02-04 01:50:29.794892 | orchestrator | Wednesday 04 February 2026 01:50:17 +0000 (0:00:08.935) 0:07:50.318 ****
2026-02-04 01:50:29.794904 | orchestrator | changed: [testbed-manager]
2026-02-04 01:50:29.794916 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:50:29.794927 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:50:29.794938 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:50:29.794949 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:50:29.794960 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:50:29.794972 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:50:29.794983 | orchestrator |
2026-02-04 01:50:29.794994 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-02-04 01:50:29.795005 | orchestrator | Wednesday 04 February 2026 01:50:18 +0000 (0:00:01.079) 0:07:51.398 ****
2026-02-04 01:50:29.795016 | orchestrator | changed: [testbed-manager]
2026-02-04 01:50:29.795027 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:50:29.795037 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:50:29.795049 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:50:29.795060 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:50:29.795070 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:50:29.795081 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:50:29.795092 | orchestrator |
2026-02-04 01:50:29.795103 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-02-04 01:50:29.795114 | orchestrator | Wednesday 04 February 2026 01:50:19 +0000 (0:00:01.402) 0:07:52.800 ****
2026-02-04 01:50:29.795125 | orchestrator | changed: [testbed-manager]
2026-02-04 01:50:29.795136 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:50:29.795147 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:50:29.795158 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:50:29.795169 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:50:29.795180 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:50:29.795190 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:50:29.795201 | orchestrator |
2026-02-04 01:50:29.795213 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-02-04 01:50:29.795224 | orchestrator | Wednesday 04 February 2026 01:50:22 +0000 (0:00:02.028) 0:07:54.829 ****
2026-02-04 01:50:29.795235 | orchestrator | changed: [testbed-manager]
2026-02-04 01:50:29.795246 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:50:29.795256 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:50:29.795267 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:50:29.795278 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:50:29.795289 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:50:29.795300 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:50:29.795311 | orchestrator |
2026-02-04 01:50:29.795323 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-02-04 01:50:29.795333 | orchestrator | Wednesday 04 February 2026 01:50:23 +0000 (0:00:01.288) 0:07:56.118 ****
2026-02-04 01:50:29.795345 | orchestrator | changed: [testbed-manager]
2026-02-04 01:50:29.795356 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:50:29.795375 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:50:29.795386 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:50:29.795397 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:50:29.795408 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:50:29.795419 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:50:29.795430 | orchestrator |
2026-02-04 01:50:29.795441 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-02-04 01:50:29.795452 | orchestrator |
2026-02-04 01:50:29.795463 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-02-04 01:50:29.795475 | orchestrator | Wednesday 04 February 2026 01:50:24 +0000 (0:00:01.174) 0:07:57.292 ****
2026-02-04 01:50:29.795486 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:50:29.795497 | orchestrator |
2026-02-04 01:50:29.795508 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-04 01:50:29.795519 | orchestrator | Wednesday 04 February 2026 01:50:25 +0000 (0:00:00.920) 0:07:58.213 ****
2026-02-04 01:50:29.795530 | orchestrator | ok: [testbed-manager]
2026-02-04 01:50:29.795541 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:50:29.795553 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:50:29.795564 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:50:29.795575 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:50:29.795586 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:50:29.795613 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:50:29.795624 | orchestrator |
2026-02-04 01:50:29.795655 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-04 01:50:29.795667 | orchestrator | Wednesday 04 February 2026 01:50:26 +0000 (0:00:01.104) 0:07:59.318 ****
2026-02-04 01:50:29.795679 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:50:29.795690 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:50:29.795701 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:50:29.795712 | orchestrator | changed: [testbed-manager]
2026-02-04 01:50:29.795722 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:50:29.795733 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:50:29.795744 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:50:29.795755 | orchestrator |
2026-02-04 01:50:29.795766 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-02-04 01:50:29.795777 | orchestrator | Wednesday 04 February 2026 01:50:27 +0000 (0:00:01.268) 0:08:00.587 ****
2026-02-04 01:50:29.795788 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 01:50:29.795800 | orchestrator |
2026-02-04 01:50:29.795810 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-02-04 01:50:29.795821 | orchestrator | Wednesday 04 February 2026 01:50:28 +0000 (0:00:01.113) 0:08:01.701 ****
2026-02-04 01:50:29.795832 | orchestrator | ok: [testbed-manager]
2026-02-04 01:50:29.795843 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:50:29.795855 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:50:29.795865 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:50:29.795876 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:50:29.795887 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:50:29.795898 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:50:29.795909 | orchestrator |
2026-02-04 01:50:29.795928 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-02-04 01:50:31.560174 | orchestrator | Wednesday 04 February 2026 01:50:29 +0000 (0:00:00.874) 0:08:02.575 ****
2026-02-04 01:50:31.560282 | orchestrator | changed: [testbed-manager]
2026-02-04 01:50:31.560305 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:50:31.560332 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:50:31.560352 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:50:31.560371 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:50:31.560389 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:50:31.560407 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:50:31.560460 | orchestrator |
2026-02-04 01:50:31.560482 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:50:31.560503 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-02-04 01:50:31.560524 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-04 01:50:31.560541 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-04 01:50:31.560559 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-02-04 01:50:31.560578 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-02-04 01:50:31.560596 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-04 01:50:31.560612 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-02-04 01:50:31.560623 | orchestrator |
2026-02-04 01:50:31.560634 | orchestrator |
2026-02-04 01:50:31.560745 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:50:31.560759 | orchestrator | Wednesday 04 February 2026 01:50:30 +0000 (0:00:01.176) 0:08:03.752 ****
2026-02-04 01:50:31.560773 | orchestrator | ===============================================================================
2026-02-04 01:50:31.560786 | orchestrator | osism.commons.packages : Install required packages --------------------- 73.55s
2026-02-04 01:50:31.560799 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.41s
2026-02-04 01:50:31.560813 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.38s
2026-02-04 01:50:31.560826 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.00s
2026-02-04 01:50:31.560839 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.77s
2026-02-04 01:50:31.560854 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.73s
2026-02-04 01:50:31.560866 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.01s
2026-02-04 01:50:31.560879 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.74s
2026-02-04 01:50:31.560892 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.94s
2026-02-04 01:50:31.560905 | orchestrator | osism.services.docker : Install containerd package ---------------------- 8.87s
2026-02-04 01:50:31.560916 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.34s
2026-02-04 01:50:31.560927 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.98s
2026-02-04 01:50:31.560938 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.62s
2026-02-04 01:50:31.560964 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.59s
2026-02-04 01:50:31.560976 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.15s
2026-02-04 01:50:31.560987 | orchestrator | osism.services.rng : Install rng package -------------------------------- 6.69s
2026-02-04 01:50:31.560998 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.49s
2026-02-04 01:50:31.561009 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.25s
2026-02-04 01:50:31.561020 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.87s
2026-02-04 01:50:31.561031 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.49s
2026-02-04 01:50:31.918148 | orchestrator | + osism apply fail2ban
2026-02-04 01:50:45.251176 | orchestrator | 2026-02-04 01:50:45 | INFO  | Task a8d50292-ba75-4600-9fa8-4117aa0401dc (fail2ban) was prepared for execution.
2026-02-04 01:50:45.251254 | orchestrator | 2026-02-04 01:50:45 | INFO  | It takes a moment until task a8d50292-ba75-4600-9fa8-4117aa0401dc (fail2ban) has been started and output is visible here. 2026-02-04 01:51:08.523553 | orchestrator | 2026-02-04 01:51:08.523668 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-02-04 01:51:08.523797 | orchestrator | 2026-02-04 01:51:08.523812 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-02-04 01:51:08.523824 | orchestrator | Wednesday 04 February 2026 01:50:50 +0000 (0:00:00.296) 0:00:00.296 **** 2026-02-04 01:51:08.523837 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 01:51:08.523851 | orchestrator | 2026-02-04 01:51:08.523863 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-02-04 01:51:08.523874 | orchestrator | Wednesday 04 February 2026 01:50:51 +0000 (0:00:01.267) 0:00:01.563 **** 2026-02-04 01:51:08.523886 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:51:08.523898 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:51:08.523909 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:51:08.523920 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:51:08.523931 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:51:08.523942 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:51:08.523953 | orchestrator | changed: [testbed-manager] 2026-02-04 01:51:08.523965 | orchestrator | 2026-02-04 01:51:08.523977 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-02-04 01:51:08.523988 | orchestrator | Wednesday 04 February 2026 01:51:03 +0000 (0:00:11.824) 0:00:13.388 **** 
2026-02-04 01:51:08.523999 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:51:08.524010 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:51:08.524022 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:51:08.524033 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:51:08.524044 | orchestrator | changed: [testbed-manager]
2026-02-04 01:51:08.524055 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:51:08.524066 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:51:08.524077 | orchestrator |
2026-02-04 01:51:08.524088 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-04 01:51:08.524101 | orchestrator | Wednesday 04 February 2026 01:51:04 +0000 (0:00:01.502) 0:00:14.891 ****
2026-02-04 01:51:08.524114 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:51:08.524128 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:51:08.524142 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:51:08.524156 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:51:08.524169 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:51:08.524181 | orchestrator | ok: [testbed-manager]
2026-02-04 01:51:08.524194 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:51:08.524208 | orchestrator |
2026-02-04 01:51:08.524221 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-04 01:51:08.524235 | orchestrator | Wednesday 04 February 2026 01:51:06 +0000 (0:00:01.479) 0:00:16.370 ****
2026-02-04 01:51:08.524248 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:51:08.524261 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:51:08.524274 | orchestrator | changed: [testbed-manager]
2026-02-04 01:51:08.524288 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:51:08.524301 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:51:08.524314 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:51:08.524327 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:51:08.524340 | orchestrator |
2026-02-04 01:51:08.524353 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:51:08.524366 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:51:08.524415 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:51:08.524436 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:51:08.524454 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:51:08.524471 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:51:08.524489 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:51:08.524509 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:51:08.524527 | orchestrator |
2026-02-04 01:51:08.524545 | orchestrator |
2026-02-04 01:51:08.524563 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:51:08.524575 | orchestrator | Wednesday 04 February 2026 01:51:08 +0000 (0:00:01.624) 0:00:17.995 ****
2026-02-04 01:51:08.524586 | orchestrator | ===============================================================================
2026-02-04 01:51:08.524597 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.82s
2026-02-04 01:51:08.524608 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.62s
2026-02-04 01:51:08.524619 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.50s
2026-02-04 01:51:08.524630 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.48s
2026-02-04 01:51:08.524641 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.27s
2026-02-04 01:51:08.849656 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-04 01:51:08.849826 | orchestrator | + osism apply network
2026-02-04 01:51:21.115780 | orchestrator | 2026-02-04 01:51:21 | INFO  | Task d31f47c8-c2a6-48bf-9fe2-83a5003309c2 (network) was prepared for execution.
2026-02-04 01:51:21.115885 | orchestrator | 2026-02-04 01:51:21 | INFO  | It takes a moment until task d31f47c8-c2a6-48bf-9fe2-83a5003309c2 (network) has been started and output is visible here.
2026-02-04 01:51:51.760977 | orchestrator |
2026-02-04 01:51:51.761074 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-04 01:51:51.761083 | orchestrator |
2026-02-04 01:51:51.761088 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-04 01:51:51.761092 | orchestrator | Wednesday 04 February 2026 01:51:25 +0000 (0:00:00.295) 0:00:00.295 ****
2026-02-04 01:51:51.761097 | orchestrator | ok: [testbed-manager]
2026-02-04 01:51:51.761102 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:51:51.761106 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:51:51.761110 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:51:51.761114 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:51:51.761118 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:51:51.761123 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:51:51.761126 | orchestrator |
2026-02-04 01:51:51.761130 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-04 01:51:51.761135 | orchestrator | Wednesday 04 February 2026 01:51:26 +0000 (0:00:00.820) 0:00:01.115 ****
2026-02-04 01:51:51.761141 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:51:51.761146 | orchestrator |
2026-02-04 01:51:51.761150 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-04 01:51:51.761172 | orchestrator | Wednesday 04 February 2026 01:51:27 +0000 (0:00:01.348) 0:00:02.464 ****
2026-02-04 01:51:51.761176 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:51:51.761180 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:51:51.761184 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:51:51.761187 | orchestrator | ok: [testbed-manager]
2026-02-04 01:51:51.761191 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:51:51.761195 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:51:51.761199 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:51:51.761202 | orchestrator |
2026-02-04 01:51:51.761206 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-04 01:51:51.761210 | orchestrator | Wednesday 04 February 2026 01:51:29 +0000 (0:00:01.947) 0:00:04.412 ****
2026-02-04 01:51:51.761214 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:51:51.761218 | orchestrator | ok: [testbed-manager]
2026-02-04 01:51:51.761222 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:51:51.761226 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:51:51.761230 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:51:51.761233 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:51:51.761237 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:51:51.761241 | orchestrator |
2026-02-04 01:51:51.761245 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-04 01:51:51.761248 | orchestrator | Wednesday 04 February 2026 01:51:31 +0000 (0:00:01.769) 0:00:06.181 ****
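[Editor's note: the osism.commons.network role above renders a netplan file on each host; later in this log the cleanup task keeps /etc/netplan/01-osism.yaml and removes 50-cloud-init.yaml. As an illustrative sketch only (not the file rendered by this run; the interface name and address are invented placeholders), such a managed netplan file looks roughly like:]

```yaml
# Illustrative sketch -- NOT the output of the osism.commons.network
# templates. Interface name and address are hypothetical placeholders.
# The managed file on these hosts is /etc/netplan/01-osism.yaml.
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses:
        - 192.168.16.10/20   # placeholder management address
      mtu: 1500
```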
2026-02-04 01:51:51.761253 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-04 01:51:51.761257 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-04 01:51:51.761261 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-04 01:51:51.761265 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-04 01:51:51.761269 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-04 01:51:51.761273 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-04 01:51:51.761276 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-04 01:51:51.761280 | orchestrator |
2026-02-04 01:51:51.761296 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-04 01:51:51.761302 | orchestrator | Wednesday 04 February 2026 01:51:32 +0000 (0:00:01.017) 0:00:07.199 ****
2026-02-04 01:51:51.761308 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 01:51:51.761315 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-04 01:51:51.761322 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 01:51:51.761328 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-04 01:51:51.761334 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-04 01:51:51.761340 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-04 01:51:51.761347 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-04 01:51:51.761352 | orchestrator |
2026-02-04 01:51:51.761356 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-04 01:51:51.761360 | orchestrator | Wednesday 04 February 2026 01:51:36 +0000 (0:00:03.771) 0:00:10.970 ****
2026-02-04 01:51:51.761364 | orchestrator | changed: [testbed-manager]
2026-02-04 01:51:51.761368 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:51:51.761372 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:51:51.761376 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:51:51.761382 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:51:51.761386 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:51:51.761390 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:51:51.761394 | orchestrator |
2026-02-04 01:51:51.761397 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-04 01:51:51.761401 | orchestrator | Wednesday 04 February 2026 01:51:38 +0000 (0:00:01.646) 0:00:12.617 ****
2026-02-04 01:51:51.761405 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 01:51:51.761409 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 01:51:51.761413 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-04 01:51:51.761416 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-04 01:51:51.761424 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-04 01:51:51.761428 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-04 01:51:51.761432 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-04 01:51:51.761436 | orchestrator |
2026-02-04 01:51:51.761439 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-02-04 01:51:51.761443 | orchestrator | Wednesday 04 February 2026 01:51:40 +0000 (0:00:01.941) 0:00:14.559 ****
2026-02-04 01:51:51.761447 | orchestrator | ok: [testbed-manager]
2026-02-04 01:51:51.761451 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:51:51.761455 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:51:51.761458 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:51:51.761463 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:51:51.761467 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:51:51.761470 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:51:51.761474 | orchestrator |
2026-02-04 01:51:51.761478 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-02-04 01:51:51.761493 | orchestrator | Wednesday 04 February 2026 01:51:41 +0000 (0:00:01.256) 0:00:15.816 ****
2026-02-04 01:51:51.761497 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:51:51.761501 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:51:51.761504 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:51:51.761508 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:51:51.761512 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:51:51.761516 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:51:51.761519 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:51:51.761523 | orchestrator |
2026-02-04 01:51:51.761527 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-02-04 01:51:51.761531 | orchestrator | Wednesday 04 February 2026 01:51:42 +0000 (0:00:00.819) 0:00:16.636 ****
2026-02-04 01:51:51.761536 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:51:51.761540 | orchestrator | ok: [testbed-manager]
2026-02-04 01:51:51.761545 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:51:51.761549 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:51:51.761554 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:51:51.761558 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:51:51.761563 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:51:51.761567 | orchestrator |
2026-02-04 01:51:51.761572 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-02-04 01:51:51.761576 | orchestrator | Wednesday 04 February 2026 01:51:44 +0000 (0:00:02.181) 0:00:18.817 ****
2026-02-04 01:51:51.761581 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:51:51.761586 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:51:51.761590 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:51:51.761595 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:51:51.761599 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:51:51.761604 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:51:51.761609 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-02-04 01:51:51.761614 | orchestrator |
2026-02-04 01:51:51.761619 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-02-04 01:51:51.761623 | orchestrator | Wednesday 04 February 2026 01:51:45 +0000 (0:00:00.983) 0:00:19.801 ****
2026-02-04 01:51:51.761628 | orchestrator | ok: [testbed-manager]
2026-02-04 01:51:51.761632 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:51:51.761637 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:51:51.761641 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:51:51.761646 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:51:51.761650 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:51:51.761654 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:51:51.761659 | orchestrator |
2026-02-04 01:51:51.761663 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-02-04 01:51:51.761668 | orchestrator | Wednesday 04 February 2026 01:51:47 +0000 (0:00:01.785) 0:00:21.586 ****
2026-02-04 01:51:51.761673 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:51:51.761682 | orchestrator |
2026-02-04 01:51:51.761687 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-04 01:51:51.761691 | orchestrator | Wednesday 04 February 2026 01:51:48 +0000 (0:00:01.355) 0:00:22.942 ****
2026-02-04 01:51:51.761696 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:51:51.761701 | orchestrator | ok: [testbed-manager]
2026-02-04 01:51:51.761705 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:51:51.761710 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:51:51.761714 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:51:51.761734 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:51:51.761739 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:51:51.761744 | orchestrator |
2026-02-04 01:51:51.761748 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-02-04 01:51:51.761753 | orchestrator | Wednesday 04 February 2026 01:51:49 +0000 (0:00:01.068) 0:00:24.011 ****
2026-02-04 01:51:51.761757 | orchestrator | ok: [testbed-manager]
2026-02-04 01:51:51.761761 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:51:51.761766 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:51:51.761770 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:51:51.761774 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:51:51.761779 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:51:51.761783 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:51:51.761788 | orchestrator |
2026-02-04 01:51:51.761793 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-04 01:51:51.761797 | orchestrator | Wednesday 04 February 2026 01:51:50 +0000 (0:00:00.921) 0:00:24.933 ****
2026-02-04 01:51:51.761810 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-02-04 01:51:51.761817 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-02-04 01:51:51.761823 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-02-04 01:51:51.761830 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-02-04 01:51:51.761836 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-04 01:51:51.761842 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-02-04 01:51:51.761848 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-04 01:51:51.761852 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-02-04 01:51:51.761856 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-04 01:51:51.761859 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-04 01:51:51.761863 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-04 01:51:51.761867 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-04 01:51:51.761871 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-02-04 01:51:51.761874 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-04 01:51:51.761878 | orchestrator |
2026-02-04 01:51:51.761887 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-02-04 01:52:10.231684 | orchestrator | Wednesday 04 February 2026 01:51:51 +0000 (0:00:01.320) 0:00:26.253 ****
2026-02-04 01:52:10.231777 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:52:10.231786 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:52:10.231791 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:52:10.231795 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:52:10.231800 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:52:10.231804 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:52:10.231809 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:52:10.231813 | orchestrator |
2026-02-04 01:52:10.231837 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-02-04 01:52:10.231842 | orchestrator | Wednesday 04 February 2026 01:51:52 +0000 (0:00:00.682) 0:00:26.936 ****
2026-02-04 01:52:10.231848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-1, testbed-manager, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:52:10.231855 | orchestrator |
2026-02-04 01:52:10.231859 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-02-04 01:52:10.231863 | orchestrator | Wednesday 04 February 2026 01:51:57 +0000 (0:00:05.226) 0:00:32.162 ****
2026-02-04 01:52:10.231869 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.231883 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:10.231888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.231893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.231898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.231902 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.231906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.231915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:10.231920 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.231924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:10.231929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:10.231944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:10.231954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:10.231958 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:10.231962 | orchestrator |
2026-02-04 01:52:10.231967 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-02-04 01:52:10.231971 | orchestrator | Wednesday 04 February 2026 01:52:04 +0000 (0:00:06.598) 0:00:38.760 ****
2026-02-04 01:52:10.231976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.231980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.231984 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.231989 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.231993 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.231997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.232001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-04 01:52:10.232006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:10.232013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:10.232018 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:10.232022 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:10.232031 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:10.232042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:17.141816 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-04 01:52:17.141901 | orchestrator |
2026-02-04 01:52:17.141910 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-02-04 01:52:17.141918 | orchestrator | Wednesday 04 February 2026 01:52:10 +0000 (0:00:05.956) 0:00:44.717 ****
2026-02-04 01:52:17.141926 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:52:17.141932 | orchestrator |
2026-02-04 01:52:17.141937 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-04 01:52:17.141942 | orchestrator | Wednesday 04 February 2026 01:52:11 +0000 (0:00:01.494) 0:00:46.212 ****
2026-02-04 01:52:17.141947 | orchestrator | ok: [testbed-manager]
2026-02-04 01:52:17.141953 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:52:17.141958 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:52:17.141963 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:52:17.141968 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:52:17.141973 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:52:17.141978 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:52:17.141982 | orchestrator |
2026-02-04 01:52:17.141987 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-04 01:52:17.141992 | orchestrator | Wednesday 04 February 2026 01:52:12 +0000 (0:00:01.232) 0:00:47.444 ****
2026-02-04 01:52:17.142000 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-04 01:52:17.142009 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-04 01:52:17.142068 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-04 01:52:17.142077 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-04 01:52:17.142085 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:52:17.142094 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-04 01:52:17.142102 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-04 01:52:17.142110 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-04 01:52:17.142118 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-04 01:52:17.142125 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:52:17.142133 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-04 01:52:17.142141 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-04 01:52:17.142149 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-04 01:52:17.142157 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-04 01:52:17.142187 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:52:17.142195 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-04 01:52:17.142202 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-04 01:52:17.142210 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-04 01:52:17.142218 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-04 01:52:17.142225 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:52:17.142245 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-04 01:52:17.142253 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-04 01:52:17.142260 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-04 01:52:17.142268 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-04 01:52:17.142276 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:52:17.142284 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-04 01:52:17.142291 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-04 01:52:17.142299 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-04 01:52:17.142307 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-04 01:52:17.142315 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:52:17.142323 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-04 01:52:17.142330 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-04 01:52:17.142338 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-04 01:52:17.142346 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-04 01:52:17.142355 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:52:17.142364 | orchestrator | 2026-02-04 01:52:17.142372 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-02-04 01:52:17.142399 | orchestrator | Wednesday 04 February 2026 01:52:15 +0000 (0:00:02.218) 0:00:49.663 **** 2026-02-04 01:52:17.142407 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:52:17.142415 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:52:17.142423 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:52:17.142431 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:52:17.142439 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:52:17.142446 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:52:17.142454 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:52:17.142461 | orchestrator | 2026-02-04 01:52:17.142469 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-02-04 01:52:17.142477 | orchestrator | Wednesday 04 February 2026 01:52:15 +0000 (0:00:00.737) 0:00:50.400 **** 2026-02-04 01:52:17.142484 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:52:17.142488 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:52:17.142493 | orchestrator 
| skipping: [testbed-node-1] 2026-02-04 01:52:17.142498 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:52:17.142504 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:52:17.142509 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:52:17.142513 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:52:17.142518 | orchestrator | 2026-02-04 01:52:17.142523 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:52:17.142529 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 01:52:17.142536 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:52:17.142551 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:52:17.142556 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:52:17.142561 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:52:17.142566 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:52:17.142570 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 01:52:17.142575 | orchestrator | 2026-02-04 01:52:17.142580 | orchestrator | 2026-02-04 01:52:17.142585 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:52:17.142590 | orchestrator | Wednesday 04 February 2026 01:52:16 +0000 (0:00:00.772) 0:00:51.173 **** 2026-02-04 01:52:17.142595 | orchestrator | =============================================================================== 2026-02-04 01:52:17.142600 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.60s 
2026-02-04 01:52:17.142604 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.96s 2026-02-04 01:52:17.142609 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.23s 2026-02-04 01:52:17.142614 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.77s 2026-02-04 01:52:17.142619 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.22s 2026-02-04 01:52:17.142624 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.18s 2026-02-04 01:52:17.142628 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.95s 2026-02-04 01:52:17.142638 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.94s 2026-02-04 01:52:17.142643 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.79s 2026-02-04 01:52:17.142648 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.77s 2026-02-04 01:52:17.142653 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.65s 2026-02-04 01:52:17.142658 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.49s 2026-02-04 01:52:17.142663 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.36s 2026-02-04 01:52:17.142667 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.35s 2026-02-04 01:52:17.142672 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.32s 2026-02-04 01:52:17.142677 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.26s 2026-02-04 01:52:17.142682 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.23s 2026-02-04 
01:52:17.142687 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.07s 2026-02-04 01:52:17.142691 | orchestrator | osism.commons.network : Create required directories --------------------- 1.02s 2026-02-04 01:52:17.142696 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.98s 2026-02-04 01:52:17.525248 | orchestrator | + osism apply wireguard 2026-02-04 01:52:29.735057 | orchestrator | 2026-02-04 01:52:29 | INFO  | Task b574b1eb-3052-4af9-b7fd-0ff50a99d7e6 (wireguard) was prepared for execution. 2026-02-04 01:52:29.735134 | orchestrator | 2026-02-04 01:52:29 | INFO  | It takes a moment until task b574b1eb-3052-4af9-b7fd-0ff50a99d7e6 (wireguard) has been started and output is visible here. 2026-02-04 01:52:52.547497 | orchestrator | 2026-02-04 01:52:52.547619 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-02-04 01:52:52.547632 | orchestrator | 2026-02-04 01:52:52.547640 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-02-04 01:52:52.547648 | orchestrator | Wednesday 04 February 2026 01:52:34 +0000 (0:00:00.250) 0:00:00.250 **** 2026-02-04 01:52:52.547655 | orchestrator | ok: [testbed-manager] 2026-02-04 01:52:52.547663 | orchestrator | 2026-02-04 01:52:52.547669 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-02-04 01:52:52.547676 | orchestrator | Wednesday 04 February 2026 01:52:36 +0000 (0:00:01.865) 0:00:02.115 **** 2026-02-04 01:52:52.547683 | orchestrator | changed: [testbed-manager] 2026-02-04 01:52:52.547695 | orchestrator | 2026-02-04 01:52:52.547702 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-02-04 01:52:52.547709 | orchestrator | Wednesday 04 February 2026 01:52:44 +0000 (0:00:07.793) 0:00:09.908 **** 2026-02-04 01:52:52.547715 | orchestrator | changed: 
[testbed-manager] 2026-02-04 01:52:52.547721 | orchestrator | 2026-02-04 01:52:52.547727 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-02-04 01:52:52.547733 | orchestrator | Wednesday 04 February 2026 01:52:44 +0000 (0:00:00.638) 0:00:10.547 **** 2026-02-04 01:52:52.547740 | orchestrator | changed: [testbed-manager] 2026-02-04 01:52:52.547746 | orchestrator | 2026-02-04 01:52:52.547752 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-02-04 01:52:52.547758 | orchestrator | Wednesday 04 February 2026 01:52:45 +0000 (0:00:00.459) 0:00:11.006 **** 2026-02-04 01:52:52.547765 | orchestrator | ok: [testbed-manager] 2026-02-04 01:52:52.547822 | orchestrator | 2026-02-04 01:52:52.547830 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-02-04 01:52:52.547836 | orchestrator | Wednesday 04 February 2026 01:52:45 +0000 (0:00:00.751) 0:00:11.758 **** 2026-02-04 01:52:52.547843 | orchestrator | ok: [testbed-manager] 2026-02-04 01:52:52.547849 | orchestrator | 2026-02-04 01:52:52.547856 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-02-04 01:52:52.547862 | orchestrator | Wednesday 04 February 2026 01:52:46 +0000 (0:00:00.460) 0:00:12.218 **** 2026-02-04 01:52:52.547868 | orchestrator | ok: [testbed-manager] 2026-02-04 01:52:52.547875 | orchestrator | 2026-02-04 01:52:52.547881 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-02-04 01:52:52.547888 | orchestrator | Wednesday 04 February 2026 01:52:46 +0000 (0:00:00.431) 0:00:12.650 **** 2026-02-04 01:52:52.547894 | orchestrator | changed: [testbed-manager] 2026-02-04 01:52:52.547900 | orchestrator | 2026-02-04 01:52:52.547907 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-02-04 01:52:52.547914 | orchestrator 
| Wednesday 04 February 2026 01:52:48 +0000 (0:00:01.292) 0:00:13.942 **** 2026-02-04 01:52:52.547920 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-04 01:52:52.547927 | orchestrator | changed: [testbed-manager] 2026-02-04 01:52:52.547933 | orchestrator | 2026-02-04 01:52:52.547940 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-02-04 01:52:52.547946 | orchestrator | Wednesday 04 February 2026 01:52:49 +0000 (0:00:01.005) 0:00:14.948 **** 2026-02-04 01:52:52.547952 | orchestrator | changed: [testbed-manager] 2026-02-04 01:52:52.547959 | orchestrator | 2026-02-04 01:52:52.547966 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-02-04 01:52:52.547973 | orchestrator | Wednesday 04 February 2026 01:52:51 +0000 (0:00:01.937) 0:00:16.885 **** 2026-02-04 01:52:52.547980 | orchestrator | changed: [testbed-manager] 2026-02-04 01:52:52.547986 | orchestrator | 2026-02-04 01:52:52.547993 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:52:52.548000 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 01:52:52.548007 | orchestrator | 2026-02-04 01:52:52.548013 | orchestrator | 2026-02-04 01:52:52.548020 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:52:52.548039 | orchestrator | Wednesday 04 February 2026 01:52:52 +0000 (0:00:01.012) 0:00:17.898 **** 2026-02-04 01:52:52.548046 | orchestrator | =============================================================================== 2026-02-04 01:52:52.548053 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.79s 2026-02-04 01:52:52.548061 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.94s 2026-02-04 01:52:52.548069 | orchestrator | 
osism.services.wireguard : Install iptables package --------------------- 1.87s 2026-02-04 01:52:52.548076 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.29s 2026-02-04 01:52:52.548083 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.01s 2026-02-04 01:52:52.548091 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.01s 2026-02-04 01:52:52.548098 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.75s 2026-02-04 01:52:52.548105 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.64s 2026-02-04 01:52:52.548112 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.46s 2026-02-04 01:52:52.548119 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s 2026-02-04 01:52:52.548127 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2026-02-04 01:52:52.902401 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-02-04 01:52:52.950593 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-02-04 01:52:52.950662 | orchestrator | Dload Upload Total Spent Left Speed 2026-02-04 01:52:53.034391 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 178 0 --:--:-- --:--:-- --:--:-- 180 2026-02-04 01:52:53.049764 | orchestrator | + osism apply --environment custom workarounds 2026-02-04 01:52:55.202073 | orchestrator | 2026-02-04 01:52:55 | INFO  | Trying to run play workarounds in environment custom 2026-02-04 01:53:05.364344 | orchestrator | 2026-02-04 01:53:05 | INFO  | Task 022a73e8-5108-4588-8cb2-b7d12cb8830f (workarounds) was prepared for execution. 
2026-02-04 01:53:05.364455 | orchestrator | 2026-02-04 01:53:05 | INFO  | It takes a moment until task 022a73e8-5108-4588-8cb2-b7d12cb8830f (workarounds) has been started and output is visible here. 2026-02-04 01:53:33.276474 | orchestrator | 2026-02-04 01:53:33.276584 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 01:53:33.276602 | orchestrator | 2026-02-04 01:53:33.276615 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-02-04 01:53:33.276626 | orchestrator | Wednesday 04 February 2026 01:53:10 +0000 (0:00:00.145) 0:00:00.145 **** 2026-02-04 01:53:33.276638 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-02-04 01:53:33.276649 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-02-04 01:53:33.276660 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-02-04 01:53:33.276671 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-02-04 01:53:33.276682 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-02-04 01:53:33.276693 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-02-04 01:53:33.276704 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-02-04 01:53:33.276715 | orchestrator | 2026-02-04 01:53:33.276726 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-02-04 01:53:33.276736 | orchestrator | 2026-02-04 01:53:33.276747 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-02-04 01:53:33.276758 | orchestrator | Wednesday 04 February 2026 01:53:11 +0000 (0:00:00.896) 0:00:01.042 **** 2026-02-04 01:53:33.276769 | orchestrator | ok: [testbed-manager] 2026-02-04 01:53:33.276860 | orchestrator | 2026-02-04 01:53:33.276876 | 
orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-02-04 01:53:33.276886 | orchestrator | 2026-02-04 01:53:33.276898 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-02-04 01:53:33.276909 | orchestrator | Wednesday 04 February 2026 01:53:13 +0000 (0:00:02.750) 0:00:03.792 **** 2026-02-04 01:53:33.276920 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:53:33.276931 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:53:33.276942 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:53:33.276952 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:53:33.276966 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:53:33.276985 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:53:33.277002 | orchestrator | 2026-02-04 01:53:33.277033 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-02-04 01:53:33.277051 | orchestrator | 2026-02-04 01:53:33.277069 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-02-04 01:53:33.277088 | orchestrator | Wednesday 04 February 2026 01:53:15 +0000 (0:00:01.815) 0:00:05.608 **** 2026-02-04 01:53:33.277107 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-04 01:53:33.277127 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-04 01:53:33.277145 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-04 01:53:33.277164 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-04 01:53:33.277182 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-04 01:53:33.277223 | orchestrator 
| changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-02-04 01:53:33.277242 | orchestrator | 2026-02-04 01:53:33.277263 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2026-02-04 01:53:33.277282 | orchestrator | Wednesday 04 February 2026 01:53:17 +0000 (0:00:01.593) 0:00:07.201 **** 2026-02-04 01:53:33.277302 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:53:33.277324 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:53:33.277345 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:53:33.277365 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:53:33.277385 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:53:33.277406 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:53:33.277423 | orchestrator | 2026-02-04 01:53:33.277442 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-02-04 01:53:33.277461 | orchestrator | Wednesday 04 February 2026 01:53:21 +0000 (0:00:03.685) 0:00:10.886 **** 2026-02-04 01:53:33.277479 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:53:33.277498 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:53:33.277515 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:53:33.277534 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:53:33.277552 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:53:33.277570 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:53:33.277589 | orchestrator | 2026-02-04 01:53:33.277607 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-02-04 01:53:33.277619 | orchestrator | 2026-02-04 01:53:33.277630 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-02-04 01:53:33.277640 | orchestrator | Wednesday 04 February 2026 01:53:21 +0000 (0:00:00.797) 0:00:11.683 **** 2026-02-04 
01:53:33.277651 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:53:33.277662 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:53:33.277673 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:53:33.277683 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:53:33.277694 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:53:33.277705 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:53:33.277731 | orchestrator | changed: [testbed-manager] 2026-02-04 01:53:33.277742 | orchestrator | 2026-02-04 01:53:33.277752 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-02-04 01:53:33.277763 | orchestrator | Wednesday 04 February 2026 01:53:23 +0000 (0:00:01.831) 0:00:13.515 **** 2026-02-04 01:53:33.277774 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:53:33.277784 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:53:33.277795 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:53:33.277832 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:53:33.277844 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:53:33.277854 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:53:33.277889 | orchestrator | changed: [testbed-manager] 2026-02-04 01:53:33.277901 | orchestrator | 2026-02-04 01:53:33.277912 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-02-04 01:53:33.277923 | orchestrator | Wednesday 04 February 2026 01:53:25 +0000 (0:00:01.758) 0:00:15.274 **** 2026-02-04 01:53:33.277934 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:53:33.277944 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:53:33.277955 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:53:33.277966 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:53:33.277976 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:53:33.277987 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:53:33.277998 | orchestrator | ok: [testbed-manager] 
2026-02-04 01:53:33.278008 | orchestrator | 2026-02-04 01:53:33.278081 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-02-04 01:53:33.278093 | orchestrator | Wednesday 04 February 2026 01:53:27 +0000 (0:00:01.711) 0:00:16.985 **** 2026-02-04 01:53:33.278104 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:53:33.278114 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:53:33.278125 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:53:33.278136 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:53:33.278147 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:53:33.278157 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:53:33.278168 | orchestrator | changed: [testbed-manager] 2026-02-04 01:53:33.278179 | orchestrator | 2026-02-04 01:53:33.278190 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-02-04 01:53:33.278200 | orchestrator | Wednesday 04 February 2026 01:53:29 +0000 (0:00:01.978) 0:00:18.964 **** 2026-02-04 01:53:33.278211 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:53:33.278222 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:53:33.278234 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:53:33.278253 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:53:33.278272 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:53:33.278292 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:53:33.278310 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:53:33.278329 | orchestrator | 2026-02-04 01:53:33.278348 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-02-04 01:53:33.278368 | orchestrator | 2026-02-04 01:53:33.278388 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-02-04 01:53:33.278408 | orchestrator | Wednesday 04 February 2026 01:53:29 +0000 (0:00:00.817) 
0:00:19.781 **** 2026-02-04 01:53:33.278419 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:53:33.278430 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:53:33.278441 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:53:33.278452 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:53:33.278463 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:53:33.278473 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:53:33.278484 | orchestrator | ok: [testbed-manager] 2026-02-04 01:53:33.278495 | orchestrator | 2026-02-04 01:53:33.278506 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:53:33.278518 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 01:53:33.278531 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 01:53:33.278552 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 01:53:33.278572 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 01:53:33.278584 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 01:53:33.278595 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 01:53:33.278606 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 01:53:33.278617 | orchestrator | 2026-02-04 01:53:33.278628 | orchestrator | 2026-02-04 01:53:33.278638 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:53:33.278649 | orchestrator | Wednesday 04 February 2026 01:53:33 +0000 (0:00:03.328) 0:00:23.110 **** 2026-02-04 01:53:33.278660 | orchestrator | 
=============================================================================== 2026-02-04 01:53:33.278671 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.69s 2026-02-04 01:53:33.278682 | orchestrator | Install python3-docker -------------------------------------------------- 3.33s 2026-02-04 01:53:33.278693 | orchestrator | Apply netplan configuration --------------------------------------------- 2.75s 2026-02-04 01:53:33.278704 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.98s 2026-02-04 01:53:33.278715 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.83s 2026-02-04 01:53:33.278726 | orchestrator | Apply netplan configuration --------------------------------------------- 1.82s 2026-02-04 01:53:33.278736 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.76s 2026-02-04 01:53:33.278747 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.71s 2026-02-04 01:53:33.278758 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.59s 2026-02-04 01:53:33.278769 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.90s 2026-02-04 01:53:33.278780 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.82s 2026-02-04 01:53:33.278801 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.80s 2026-02-04 01:53:34.136068 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-02-04 01:53:46.571494 | orchestrator | 2026-02-04 01:53:46 | INFO  | Task 4622a838-fbc0-4f0f-ab8b-ad663a249851 (reboot) was prepared for execution. 
2026-02-04 01:53:46.571559 | orchestrator | 2026-02-04 01:53:46 | INFO  | It takes a moment until task 4622a838-fbc0-4f0f-ab8b-ad663a249851 (reboot) has been started and output is visible here. 2026-02-04 01:53:57.321412 | orchestrator | 2026-02-04 01:53:57.321544 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-04 01:53:57.321567 | orchestrator | 2026-02-04 01:53:57.321583 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-04 01:53:57.321600 | orchestrator | Wednesday 04 February 2026 01:53:51 +0000 (0:00:00.217) 0:00:00.217 **** 2026-02-04 01:53:57.321617 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:53:57.321636 | orchestrator | 2026-02-04 01:53:57.321652 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-02-04 01:53:57.321669 | orchestrator | Wednesday 04 February 2026 01:53:51 +0000 (0:00:00.122) 0:00:00.340 **** 2026-02-04 01:53:57.321684 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:53:57.321701 | orchestrator | 2026-02-04 01:53:57.321717 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-02-04 01:53:57.321752 | orchestrator | Wednesday 04 February 2026 01:53:52 +0000 (0:00:00.943) 0:00:01.283 **** 2026-02-04 01:53:57.321763 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:53:57.321780 | orchestrator | 2026-02-04 01:53:57.321799 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-02-04 01:53:57.321818 | orchestrator | 2026-02-04 01:53:57.321857 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-02-04 01:53:57.321869 | orchestrator | Wednesday 04 February 2026 01:53:52 +0000 (0:00:00.136) 0:00:01.420 **** 2026-02-04 01:53:57.321883 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:53:57.321901 | 
orchestrator |
2026-02-04 01:53:57.321921 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 01:53:57.321940 | orchestrator | Wednesday 04 February 2026 01:53:52 +0000 (0:00:00.122) 0:00:01.543 ****
2026-02-04 01:53:57.321965 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:53:57.321991 | orchestrator |
2026-02-04 01:53:57.322097 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 01:53:57.322126 | orchestrator | Wednesday 04 February 2026 01:53:53 +0000 (0:00:00.678) 0:00:02.221 ****
2026-02-04 01:53:57.322152 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:53:57.322176 | orchestrator |
2026-02-04 01:53:57.322201 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 01:53:57.322225 | orchestrator |
2026-02-04 01:53:57.322250 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 01:53:57.322274 | orchestrator | Wednesday 04 February 2026 01:53:53 +0000 (0:00:00.121) 0:00:02.343 ****
2026-02-04 01:53:57.322298 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:53:57.322325 | orchestrator |
2026-02-04 01:53:57.322352 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 01:53:57.322377 | orchestrator | Wednesday 04 February 2026 01:53:53 +0000 (0:00:00.247) 0:00:02.590 ****
2026-02-04 01:53:57.322401 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:53:57.322426 | orchestrator |
2026-02-04 01:53:57.322475 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 01:53:57.322504 | orchestrator | Wednesday 04 February 2026 01:53:54 +0000 (0:00:00.667) 0:00:03.258 ****
2026-02-04 01:53:57.322527 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:53:57.322538 | orchestrator |
2026-02-04 01:53:57.322550 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 01:53:57.322560 | orchestrator |
2026-02-04 01:53:57.322571 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 01:53:57.322582 | orchestrator | Wednesday 04 February 2026 01:53:54 +0000 (0:00:00.127) 0:00:03.386 ****
2026-02-04 01:53:57.322593 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:53:57.322604 | orchestrator |
2026-02-04 01:53:57.322616 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 01:53:57.322626 | orchestrator | Wednesday 04 February 2026 01:53:54 +0000 (0:00:00.128) 0:00:03.514 ****
2026-02-04 01:53:57.322637 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:53:57.322648 | orchestrator |
2026-02-04 01:53:57.322658 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 01:53:57.322668 | orchestrator | Wednesday 04 February 2026 01:53:55 +0000 (0:00:00.718) 0:00:04.233 ****
2026-02-04 01:53:57.322679 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:53:57.322691 | orchestrator |
2026-02-04 01:53:57.322701 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 01:53:57.322714 | orchestrator |
2026-02-04 01:53:57.322724 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 01:53:57.322735 | orchestrator | Wednesday 04 February 2026 01:53:55 +0000 (0:00:00.104) 0:00:04.338 ****
2026-02-04 01:53:57.322746 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:53:57.322756 | orchestrator |
2026-02-04 01:53:57.322768 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 01:53:57.322798 | orchestrator | Wednesday 04 February 2026 01:53:55 +0000 (0:00:00.110) 0:00:04.448 ****
2026-02-04 01:53:57.322810 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:53:57.322885 | orchestrator |
2026-02-04 01:53:57.322909 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 01:53:57.322929 | orchestrator | Wednesday 04 February 2026 01:53:56 +0000 (0:00:00.645) 0:00:05.094 ****
2026-02-04 01:53:57.322948 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:53:57.322968 | orchestrator |
2026-02-04 01:53:57.322987 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-04 01:53:57.323006 | orchestrator |
2026-02-04 01:53:57.323026 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-04 01:53:57.323046 | orchestrator | Wednesday 04 February 2026 01:53:56 +0000 (0:00:00.109) 0:00:05.203 ****
2026-02-04 01:53:57.323065 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:53:57.323084 | orchestrator |
2026-02-04 01:53:57.323103 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-04 01:53:57.323122 | orchestrator | Wednesday 04 February 2026 01:53:56 +0000 (0:00:00.102) 0:00:05.306 ****
2026-02-04 01:53:57.323140 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:53:57.323158 | orchestrator |
2026-02-04 01:53:57.323177 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-04 01:53:57.323198 | orchestrator | Wednesday 04 February 2026 01:53:57 +0000 (0:00:00.653) 0:00:05.960 ****
2026-02-04 01:53:57.323249 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:53:57.323269 | orchestrator |
2026-02-04 01:53:57.323290 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:53:57.323312 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 01:53:57.323334 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 01:53:57.323354 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 01:53:57.323374 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 01:53:57.323395 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 01:53:57.323415 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 01:53:57.323436 | orchestrator |
2026-02-04 01:53:57.323456 | orchestrator |
2026-02-04 01:53:57.323473 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:53:57.323484 | orchestrator | Wednesday 04 February 2026 01:53:57 +0000 (0:00:00.029) 0:00:05.989 ****
2026-02-04 01:53:57.323494 | orchestrator | ===============================================================================
2026-02-04 01:53:57.323504 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.31s
2026-02-04 01:53:57.323526 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.83s
2026-02-04 01:53:57.323550 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s
2026-02-04 01:53:57.562641 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-02-04 01:54:09.758259 | orchestrator | 2026-02-04 01:54:09 | INFO  | Task 59f5e867-3444-4ec6-a7a4-b669e106c806 (wait-for-connection) was prepared for execution.
2026-02-04 01:54:09.758415 | orchestrator | 2026-02-04 01:54:09 | INFO  | It takes a moment until task 59f5e867-3444-4ec6-a7a4-b669e106c806 (wait-for-connection) has been started and output is visible here.
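The reboot play above intentionally skips its own "wait for the reboot to complete" task; reachability is verified afterwards by the separate `wait-for-connection` run. A minimal bash sketch of that kind of post-reboot poll is shown below. The function name `wait_for_tcp` and its parameters are hypothetical illustrations; the actual play uses Ansible's `wait_for_connection` module rather than a raw TCP probe.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: poll a TCP port until the rebooted host answers,
# giving up after a timeout. Not the playbook's actual implementation.
wait_for_tcp() {
    local host=$1 port=$2 timeout=${3:-600} waited=0
    # /dev/tcp is a bash built-in pseudo-device; each probe is capped at 5s.
    until timeout 5 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; do
        (( waited += 5 ))
        if (( waited >= timeout )); then
            return 1
        fi
        sleep 5
    done
}
```

A caller would invoke it per node, e.g. `wait_for_tcp testbed-node-1 22`, mirroring what the wait-for-connection play does for SSH.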
2026-02-04 01:54:26.397236 | orchestrator |
2026-02-04 01:54:26.397347 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-02-04 01:54:26.397365 | orchestrator |
2026-02-04 01:54:26.397378 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-02-04 01:54:26.397391 | orchestrator | Wednesday 04 February 2026 01:54:14 +0000 (0:00:00.243) 0:00:00.243 ****
2026-02-04 01:54:26.397404 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:54:26.397417 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:54:26.397429 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:54:26.397441 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:54:26.397453 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:54:26.397465 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:54:26.397477 | orchestrator |
2026-02-04 01:54:26.397489 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:54:26.397516 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:54:26.397530 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:54:26.397542 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:54:26.397553 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:54:26.397564 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:54:26.397575 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:54:26.397587 | orchestrator |
2026-02-04 01:54:26.397598 | orchestrator |
2026-02-04 01:54:26.397609 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:54:26.397620 | orchestrator | Wednesday 04 February 2026 01:54:25 +0000 (0:00:11.600) 0:00:11.844 ****
2026-02-04 01:54:26.397641 | orchestrator | ===============================================================================
2026-02-04 01:54:26.397653 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.60s
2026-02-04 01:54:26.761891 | orchestrator | + osism apply hddtemp
2026-02-04 01:54:39.001563 | orchestrator | 2026-02-04 01:54:38 | INFO  | Task eead946e-fdab-456e-a67f-7961b6094f6e (hddtemp) was prepared for execution.
2026-02-04 01:54:39.001674 | orchestrator | 2026-02-04 01:54:39 | INFO  | It takes a moment until task eead946e-fdab-456e-a67f-7961b6094f6e (hddtemp) has been started and output is visible here.
2026-02-04 01:55:07.371055 | orchestrator |
2026-02-04 01:55:07.371149 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-02-04 01:55:07.371167 | orchestrator |
2026-02-04 01:55:07.371180 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-02-04 01:55:07.371190 | orchestrator | Wednesday 04 February 2026 01:54:43 +0000 (0:00:00.275) 0:00:00.275 ****
2026-02-04 01:55:07.371197 | orchestrator | ok: [testbed-manager]
2026-02-04 01:55:07.371206 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:55:07.371213 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:55:07.371220 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:55:07.371227 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:55:07.371234 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:55:07.371241 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:55:07.371248 | orchestrator |
2026-02-04 01:55:07.371255 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-02-04 01:55:07.371263 | orchestrator | Wednesday 04 February 2026 01:54:44 +0000 (0:00:00.788) 0:00:01.064 ****
2026-02-04 01:55:07.371272 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:55:07.371303 | orchestrator |
2026-02-04 01:55:07.371310 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2026-02-04 01:55:07.371317 | orchestrator | Wednesday 04 February 2026 01:54:45 +0000 (0:00:01.284) 0:00:02.348 ****
2026-02-04 01:55:07.371324 | orchestrator | ok: [testbed-manager]
2026-02-04 01:55:07.371331 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:55:07.371338 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:55:07.371345 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:55:07.371352 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:55:07.371359 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:55:07.371366 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:55:07.371372 | orchestrator |
2026-02-04 01:55:07.371379 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2026-02-04 01:55:07.371388 | orchestrator | Wednesday 04 February 2026 01:54:47 +0000 (0:00:02.025) 0:00:04.374 ****
2026-02-04 01:55:07.371400 | orchestrator | changed: [testbed-manager]
2026-02-04 01:55:07.371412 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:55:07.371427 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:55:07.371442 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:55:07.371453 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:55:07.371464 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:55:07.371475 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:55:07.371487 | orchestrator |
2026-02-04 01:55:07.371497 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2026-02-04 01:55:07.371508 | orchestrator | Wednesday 04 February 2026 01:54:49 +0000 (0:00:01.238) 0:00:05.613 ****
2026-02-04 01:55:07.371519 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:55:07.371531 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:55:07.371540 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:55:07.371547 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:55:07.371553 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:55:07.371575 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:55:07.371586 | orchestrator | ok: [testbed-manager]
2026-02-04 01:55:07.371596 | orchestrator |
2026-02-04 01:55:07.371608 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2026-02-04 01:55:07.371619 | orchestrator | Wednesday 04 February 2026 01:54:50 +0000 (0:00:01.212) 0:00:06.825 ****
2026-02-04 01:55:07.371631 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:55:07.371642 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:55:07.371653 | orchestrator | changed: [testbed-manager]
2026-02-04 01:55:07.371665 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:55:07.371690 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:55:07.371703 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:55:07.371715 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:55:07.371725 | orchestrator |
2026-02-04 01:55:07.371733 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2026-02-04 01:55:07.371740 | orchestrator | Wednesday 04 February 2026 01:54:51 +0000 (0:00:00.950) 0:00:07.775 ****
2026-02-04 01:55:07.371747 | orchestrator | changed: [testbed-manager]
2026-02-04 01:55:07.371753 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:55:07.371760 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:55:07.371767 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:55:07.371774 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:55:07.371780 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:55:07.371787 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:55:07.371794 | orchestrator |
2026-02-04 01:55:07.371801 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2026-02-04 01:55:07.371807 | orchestrator | Wednesday 04 February 2026 01:55:03 +0000 (0:00:12.270) 0:00:20.046 ****
2026-02-04 01:55:07.371814 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 01:55:07.371831 | orchestrator |
2026-02-04 01:55:07.371838 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2026-02-04 01:55:07.371847 | orchestrator | Wednesday 04 February 2026 01:55:04 +0000 (0:00:01.412) 0:00:21.459 ****
2026-02-04 01:55:07.371858 | orchestrator | changed: [testbed-node-2]
2026-02-04 01:55:07.371892 | orchestrator | changed: [testbed-node-1]
2026-02-04 01:55:07.371904 | orchestrator | changed: [testbed-node-3]
2026-02-04 01:55:07.371916 | orchestrator | changed: [testbed-node-0]
2026-02-04 01:55:07.371927 | orchestrator | changed: [testbed-manager]
2026-02-04 01:55:07.371938 | orchestrator | changed: [testbed-node-4]
2026-02-04 01:55:07.371950 | orchestrator | changed: [testbed-node-5]
2026-02-04 01:55:07.371962 | orchestrator |
2026-02-04 01:55:07.371974 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:55:07.371985 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 01:55:07.372019 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:55:07.372027 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:55:07.372034 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:55:07.372041 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:55:07.372048 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:55:07.372054 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:55:07.372061 | orchestrator |
2026-02-04 01:55:07.372068 | orchestrator |
2026-02-04 01:55:07.372074 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:55:07.372081 | orchestrator | Wednesday 04 February 2026 01:55:06 +0000 (0:00:01.954) 0:00:23.413 ****
2026-02-04 01:55:07.372088 | orchestrator | ===============================================================================
2026-02-04 01:55:07.372095 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.27s
2026-02-04 01:55:07.372101 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.03s
2026-02-04 01:55:07.372108 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.95s
2026-02-04 01:55:07.372115 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.41s
2026-02-04 01:55:07.372121 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.28s
2026-02-04 01:55:07.372128 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.24s
2026-02-04 01:55:07.372135 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.21s
2026-02-04 01:55:07.372141 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.95s
2026-02-04 01:55:07.372148 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.79s
2026-02-04 01:55:07.745032 | orchestrator | ++ semver 9.5.0 7.1.1
2026-02-04 01:55:07.816604 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-04 01:55:07.816703 | orchestrator | + sudo systemctl restart manager.service
2026-02-04 01:55:21.515601 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-04 01:55:21.515712 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-02-04 01:55:21.515750 | orchestrator | + local max_attempts=60
2026-02-04 01:55:21.515764 | orchestrator | + local name=ceph-ansible
2026-02-04 01:55:21.515780 | orchestrator | + local attempt_num=1
2026-02-04 01:55:21.515797 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:55:21.556141 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 01:55:21.556259 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 01:55:21.556284 | orchestrator | + sleep 5
2026-02-04 01:55:26.558815 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:55:26.611665 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 01:55:26.611953 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 01:55:26.611972 | orchestrator | + sleep 5
2026-02-04 01:55:31.614707 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:55:31.646279 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 01:55:31.646366 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 01:55:31.646379 | orchestrator | + sleep 5
2026-02-04 01:55:36.650323 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:55:36.687285 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 01:55:36.687412 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 01:55:36.687512 | orchestrator | + sleep 5
2026-02-04 01:55:41.691162 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:55:41.726998 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 01:55:41.727093 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 01:55:41.727107 | orchestrator | + sleep 5
2026-02-04 01:55:46.731739 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:55:46.769112 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 01:55:46.769204 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 01:55:46.769214 | orchestrator | + sleep 5
2026-02-04 01:55:51.774374 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:55:51.814125 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2026-02-04 01:55:51.814257 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 01:55:51.814276 | orchestrator | + sleep 5
2026-02-04 01:55:56.820008 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:55:56.875559 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-04 01:55:56.875664 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 01:55:56.875681 | orchestrator | + sleep 5
2026-02-04 01:56:01.877268 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:56:01.933502 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-04 01:56:01.933610 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 01:56:01.933625 | orchestrator | + sleep 5
2026-02-04 01:56:06.934963 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:56:06.971452 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-04 01:56:06.971530 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 01:56:06.971541 | orchestrator | + sleep 5
2026-02-04 01:56:11.974442 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:56:12.022774 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-04 01:56:12.022881 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 01:56:12.022933 | orchestrator | + sleep 5
2026-02-04 01:56:17.027501 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:56:17.058931 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-04 01:56:17.086195 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 01:56:17.086370 | orchestrator | + sleep 5
2026-02-04 01:56:22.062951 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:56:22.094406 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2026-02-04 01:56:22.094492 | orchestrator | + (( attempt_num++ == max_attempts ))
2026-02-04 01:56:22.094508 | orchestrator | + sleep 5
2026-02-04 01:56:27.098845 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-02-04 01:56:27.134828 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-04 01:56:27.135009 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-02-04 01:56:27.135029 | orchestrator | + local max_attempts=60
2026-02-04 01:56:27.135042 | orchestrator | + local name=kolla-ansible
2026-02-04 01:56:27.135053 | orchestrator | + local attempt_num=1
2026-02-04 01:56:27.135064 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-02-04 01:56:27.168366 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-04 01:56:27.168451 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-02-04 01:56:27.168496 | orchestrator | + local max_attempts=60
2026-02-04 01:56:27.168513 | orchestrator | + local name=osism-ansible
2026-02-04 01:56:27.168527 | orchestrator | + local attempt_num=1
2026-02-04 01:56:27.169180 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-02-04 01:56:27.200867 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-02-04 01:56:27.201196 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-04 01:56:27.201226 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-02-04 01:56:27.380841 | orchestrator | ARA in ceph-ansible already disabled.
2026-02-04 01:56:27.558174 | orchestrator | ARA in kolla-ansible already disabled.
2026-02-04 01:56:27.736751 | orchestrator | ARA in osism-ansible already disabled.
2026-02-04 01:56:27.888412 | orchestrator | ARA in osism-kubernetes already disabled.
2026-02-04 01:56:27.888733 | orchestrator | + osism apply gather-facts
2026-02-04 01:56:40.264698 | orchestrator | 2026-02-04 01:56:40 | INFO  | Task 57c4d69b-ce66-4cfb-a916-6087df53892b (gather-facts) was prepared for execution.
2026-02-04 01:56:40.264785 | orchestrator | 2026-02-04 01:56:40 | INFO  | It takes a moment until task 57c4d69b-ce66-4cfb-a916-6087df53892b (gather-facts) has been started and output is visible here.
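The xtrace above reveals the shape of the `wait_for_container_healthy` helper: poll `docker inspect` for the container's health status every 5 seconds until it reports `healthy`, giving up after `max_attempts` tries. The following is a sketch reconstructed from that trace, not the script's verbatim source; the trace calls `/usr/bin/docker` by absolute path, shortened here to `docker` for illustration.

```shell
#!/usr/bin/env bash
# Reconstructed from the xtrace: retry until the named container reports
# a "healthy" Docker health-check status, at most max_attempts times.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Each probe reads the health status from the container's state.
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "${name}")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            return 1  # gave up; the status was e.g. "unhealthy" or "starting"
        fi
        sleep 5
    done
}
```

In the log this gates the restart of `manager.service`: the ceph-ansible container cycles through `unhealthy` and `starting` for roughly 65 seconds before the loop sees `healthy` and moves on to kolla-ansible and osism-ansible.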
2026-02-04 01:56:54.156056 | orchestrator |
2026-02-04 01:56:54.156186 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-04 01:56:54.156205 | orchestrator |
2026-02-04 01:56:54.156218 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-04 01:56:54.156230 | orchestrator | Wednesday 04 February 2026 01:56:45 +0000 (0:00:00.239) 0:00:00.239 ****
2026-02-04 01:56:54.156243 | orchestrator | ok: [testbed-node-2]
2026-02-04 01:56:54.156256 | orchestrator | ok: [testbed-node-1]
2026-02-04 01:56:54.156266 | orchestrator | ok: [testbed-node-0]
2026-02-04 01:56:54.156277 | orchestrator | ok: [testbed-node-3]
2026-02-04 01:56:54.156288 | orchestrator | ok: [testbed-manager]
2026-02-04 01:56:54.156299 | orchestrator | ok: [testbed-node-4]
2026-02-04 01:56:54.156310 | orchestrator | ok: [testbed-node-5]
2026-02-04 01:56:54.156321 | orchestrator |
2026-02-04 01:56:54.156332 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-04 01:56:54.156343 | orchestrator |
2026-02-04 01:56:54.156355 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-04 01:56:54.156366 | orchestrator | Wednesday 04 February 2026 01:56:53 +0000 (0:00:07.969) 0:00:08.209 ****
2026-02-04 01:56:54.156377 | orchestrator | skipping: [testbed-manager]
2026-02-04 01:56:54.156389 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:56:54.156400 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:56:54.156411 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:56:54.156422 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:56:54.156432 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:56:54.156443 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:56:54.156454 | orchestrator |
2026-02-04 01:56:54.156465 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 01:56:54.156477 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:56:54.156489 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:56:54.156500 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:56:54.156512 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:56:54.156522 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:56:54.156534 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:56:54.156574 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 01:56:54.156586 | orchestrator |
2026-02-04 01:56:54.156597 | orchestrator |
2026-02-04 01:56:54.156608 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 01:56:54.156619 | orchestrator | Wednesday 04 February 2026 01:56:53 +0000 (0:00:00.668) 0:00:08.877 ****
2026-02-04 01:56:54.156630 | orchestrator | ===============================================================================
2026-02-04 01:56:54.156641 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.97s
2026-02-04 01:56:54.156652 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.67s
2026-02-04 01:56:54.527459 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2026-02-04 01:56:54.551063 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2026-02-04 01:56:54.569028 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2026-02-04 01:56:54.594496 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2026-02-04 01:56:54.608740 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2026-02-04 01:56:54.622669 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal
2026-02-04 01:56:54.636321 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2026-02-04 01:56:54.658577 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2026-02-04 01:56:54.675242 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2026-02-04 01:56:54.689440 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager
2026-02-04 01:56:54.708359 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2026-02-04 01:56:54.725855 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2026-02-04 01:56:54.747577 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2026-02-04 01:56:54.762612 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2026-02-04 01:56:54.776801 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal
2026-02-04 01:56:54.792128 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-02-04 01:56:54.807714 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-02-04 01:56:54.824166 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-02-04 01:56:54.839804 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-02-04 01:56:54.854446 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-02-04 01:56:54.869497 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-02-04 01:56:54.882840 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-02-04 01:56:54.895859 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-02-04 01:56:54.908868 | orchestrator | + [[ false == \t\r\u\e ]]
2026-02-04 01:56:55.012661 | orchestrator | ok: Runtime: 0:25:02.839593
2026-02-04 01:56:55.108439 |
2026-02-04 01:56:55.108582 | TASK [Deploy services]
2026-02-04 01:56:55.819566 | orchestrator |
2026-02-04 01:56:55.819651 | orchestrator | # DEPLOY SERVICES
2026-02-04 01:56:55.819661 | orchestrator |
2026-02-04 01:56:55.819666 | orchestrator | + set -e
2026-02-04 01:56:55.819670 | orchestrator | + echo
2026-02-04 01:56:55.819676 | orchestrator | + echo '# DEPLOY SERVICES'
2026-02-04 01:56:55.819682 | orchestrator | + echo
2026-02-04 01:56:55.819701 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-04 01:56:55.819710 | orchestrator | ++ export INTERACTIVE=false
2026-02-04 01:56:55.819716 | orchestrator | ++ INTERACTIVE=false
2026-02-04 01:56:55.819721 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-04 01:56:55.819730 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-04 01:56:55.819734 | orchestrator | + source /opt/manager-vars.sh
2026-02-04 01:56:55.819740 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-04 01:56:55.819744 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-04 01:56:55.819751 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-04 01:56:55.819754 | orchestrator | ++ CEPH_VERSION=reef
2026-02-04 01:56:55.819760 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-04 01:56:55.819764 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-04 01:56:55.819770 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-04 01:56:55.819773 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-04 01:56:55.819777 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-04 01:56:55.819782 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-04 01:56:55.819786 | orchestrator | ++ export ARA=false
2026-02-04 01:56:55.819790 | orchestrator | ++ ARA=false
2026-02-04 01:56:55.819794 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-04 01:56:55.819798 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-04 01:56:55.819801 | orchestrator | ++ export TEMPEST=false
2026-02-04 01:56:55.819805 | orchestrator | ++ TEMPEST=false
2026-02-04 01:56:55.819809 | orchestrator | ++ export IS_ZUUL=true
2026-02-04 01:56:55.819813 | orchestrator | ++ IS_ZUUL=true
2026-02-04 01:56:55.819816 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-04 01:56:55.819821 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-04 01:56:55.819824 | orchestrator | ++ export EXTERNAL_API=false
2026-02-04 01:56:55.819828 | orchestrator | ++ EXTERNAL_API=false
2026-02-04 01:56:55.819832 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-04 01:56:55.819836 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-04 01:56:55.819839 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-04 01:56:55.819843 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-04 01:56:55.819847 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-04 01:56:55.819854 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-04 01:56:55.819858 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-04 01:56:55.825926 | orchestrator | + set -e
2026-02-04 01:56:55.825970 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-04 01:56:55.825978 | orchestrator | ++ export INTERACTIVE=false
2026-02-04 01:56:55.825983 | orchestrator | ++ INTERACTIVE=false
2026-02-04 01:56:55.825987 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-04 01:56:55.825991 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-04 01:56:55.825995 | orchestrator | + source /opt/manager-vars.sh
2026-02-04 01:56:55.825999 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-04 01:56:55.826003 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-04 01:56:55.826006 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-04 01:56:55.826010 | orchestrator | ++ CEPH_VERSION=reef
2026-02-04 01:56:55.826031 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-04 01:56:55.826036 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-04 01:56:55.826040 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-04 01:56:55.826043 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-04 01:56:55.826048 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-04 01:56:55.826051 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-04 01:56:55.826055 | orchestrator | ++ export ARA=false
2026-02-04 01:56:55.826059 | orchestrator | ++ ARA=false
2026-02-04 01:56:55.826063 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-04 01:56:55.826067 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-04 01:56:55.826071 | orchestrator | ++ export TEMPEST=false
2026-02-04 01:56:55.826077 | orchestrator | ++ TEMPEST=false
2026-02-04 01:56:55.826080 | orchestrator | ++ export IS_ZUUL=true
2026-02-04 01:56:55.826084 | orchestrator | ++ IS_ZUUL=true
2026-02-04 01:56:55.826088 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-04 01:56:55.826092 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-04 01:56:55.826096 | orchestrator | ++ export EXTERNAL_API=false
2026-02-04 01:56:55.826100 | orchestrator | ++ EXTERNAL_API=false
2026-02-04 01:56:55.826104 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-04 01:56:55.826107 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-04 01:56:55.826117 | orchestrator |
2026-02-04 01:56:55.826122 | orchestrator | # PULL IMAGES
2026-02-04 01:56:55.826125 | orchestrator |
2026-02-04 01:56:55.826143 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-04 01:56:55.826147 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-04 01:56:55.826151 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-04 01:56:55.826155 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-04 01:56:55.826159 | orchestrator | + echo
2026-02-04 01:56:55.826163 | orchestrator | + echo '# PULL IMAGES'
2026-02-04 01:56:55.826167 | orchestrator | + echo
2026-02-04 01:56:55.827046 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-04 01:56:55.880623 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-04 01:56:55.880705 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-04 01:56:58.042544 | orchestrator | 2026-02-04 01:56:58 | INFO  | Trying to run play pull-images in environment custom
2026-02-04 01:57:08.203378 | orchestrator | 2026-02-04 01:57:08 | INFO  | Task f8ec95b5-c260-4e9f-a114-a05660545707 (pull-images) was prepared for execution.
2026-02-04 01:57:08.203509 | orchestrator | 2026-02-04 01:57:08 | INFO  | Task f8ec95b5-c260-4e9f-a114-a05660545707 is running in background. No more output. Check ARA for logs.
2026-02-04 01:57:08.603534 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-02-04 01:57:20.786668 | orchestrator | 2026-02-04 01:57:20 | INFO  | Task 9c5e0c76-fa27-4804-a34c-cae6da3d3f6c (cgit) was prepared for execution. 2026-02-04 01:57:20.786795 | orchestrator | 2026-02-04 01:57:20 | INFO  | Task 9c5e0c76-fa27-4804-a34c-cae6da3d3f6c is running in background. No more output. Check ARA for logs. 2026-02-04 01:57:33.787280 | orchestrator | 2026-02-04 01:57:33 | INFO  | Task f0f05275-80d2-411a-bdc0-8192b7c65c49 (dotfiles) was prepared for execution. 2026-02-04 01:57:33.787393 | orchestrator | 2026-02-04 01:57:33 | INFO  | Task f0f05275-80d2-411a-bdc0-8192b7c65c49 is running in background. No more output. Check ARA for logs. 2026-02-04 01:57:46.491809 | orchestrator | 2026-02-04 01:57:46 | INFO  | Task 1d475a12-6cc5-4905-91cc-d9c83f1c7d09 (homer) was prepared for execution. 2026-02-04 01:57:46.491886 | orchestrator | 2026-02-04 01:57:46 | INFO  | Task 1d475a12-6cc5-4905-91cc-d9c83f1c7d09 is running in background. No more output. Check ARA for logs. 2026-02-04 01:57:59.469553 | orchestrator | 2026-02-04 01:57:59 | INFO  | Task fdbc6eda-cf4c-4ee8-a6f0-caeb14d3a21a (phpmyadmin) was prepared for execution. 2026-02-04 01:57:59.469641 | orchestrator | 2026-02-04 01:57:59 | INFO  | Task fdbc6eda-cf4c-4ee8-a6f0-caeb14d3a21a is running in background. No more output. Check ARA for logs. 2026-02-04 01:58:12.670569 | orchestrator | 2026-02-04 01:58:12 | INFO  | Task f7526146-f834-40b0-a5d5-a7e088ebf3de (sosreport) was prepared for execution. 2026-02-04 01:58:12.670661 | orchestrator | 2026-02-04 01:58:12 | INFO  | Task f7526146-f834-40b0-a5d5-a7e088ebf3de is running in background. No more output. Check ARA for logs. 
2026-02-04 01:58:13.079408 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-02-04 01:58:13.085725 | orchestrator | + set -e 2026-02-04 01:58:13.085791 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-04 01:58:13.085797 | orchestrator | ++ export INTERACTIVE=false 2026-02-04 01:58:13.085803 | orchestrator | ++ INTERACTIVE=false 2026-02-04 01:58:13.085809 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-04 01:58:13.085813 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-04 01:58:13.085818 | orchestrator | + source /opt/manager-vars.sh 2026-02-04 01:58:13.085822 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-04 01:58:13.085826 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-04 01:58:13.085830 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-04 01:58:13.085834 | orchestrator | ++ CEPH_VERSION=reef 2026-02-04 01:58:13.085839 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-04 01:58:13.085843 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-04 01:58:13.085847 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-04 01:58:13.085851 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-04 01:58:13.085855 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-04 01:58:13.085859 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-04 01:58:13.085863 | orchestrator | ++ export ARA=false 2026-02-04 01:58:13.085867 | orchestrator | ++ ARA=false 2026-02-04 01:58:13.085874 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-04 01:58:13.085904 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-04 01:58:13.085911 | orchestrator | ++ export TEMPEST=false 2026-02-04 01:58:13.085917 | orchestrator | ++ TEMPEST=false 2026-02-04 01:58:13.085922 | orchestrator | ++ export IS_ZUUL=true 2026-02-04 01:58:13.085928 | orchestrator | ++ IS_ZUUL=true 2026-02-04 01:58:13.086065 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-04 01:58:13.086079 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-04 01:58:13.086087 | orchestrator | ++ export EXTERNAL_API=false 2026-02-04 01:58:13.086092 | orchestrator | ++ EXTERNAL_API=false 2026-02-04 01:58:13.086099 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-04 01:58:13.086109 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-04 01:58:13.086118 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-04 01:58:13.086124 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-04 01:58:13.086130 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-04 01:58:13.086135 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-04 01:58:13.087051 | orchestrator | ++ semver 9.5.0 8.0.3 2026-02-04 01:58:13.139829 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-04 01:58:13.139930 | orchestrator | + osism apply frr 2026-02-04 01:58:25.688897 | orchestrator | 2026-02-04 01:58:25 | INFO  | Task 0d638e31-8888-4c6e-81a4-b359580052ad (frr) was prepared for execution. 2026-02-04 01:58:25.689022 | orchestrator | 2026-02-04 01:58:25 | INFO  | It takes a moment until task 0d638e31-8888-4c6e-81a4-b359580052ad (frr) has been started and output is visible here. 
2026-02-04 01:59:07.070639 | orchestrator | 2026-02-04 01:59:07.070742 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-04 01:59:07.070753 | orchestrator | 2026-02-04 01:59:07.070777 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-04 01:59:07.070790 | orchestrator | Wednesday 04 February 2026 01:58:33 +0000 (0:00:00.810) 0:00:00.810 **** 2026-02-04 01:59:07.070798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 01:59:07.070806 | orchestrator | 2026-02-04 01:59:07.070813 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-04 01:59:07.070819 | orchestrator | Wednesday 04 February 2026 01:58:34 +0000 (0:00:00.661) 0:00:01.471 **** 2026-02-04 01:59:07.070826 | orchestrator | changed: [testbed-manager] 2026-02-04 01:59:07.070834 | orchestrator | 2026-02-04 01:59:07.070841 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-04 01:59:07.070849 | orchestrator | Wednesday 04 February 2026 01:58:36 +0000 (0:00:02.693) 0:00:04.164 **** 2026-02-04 01:59:07.070855 | orchestrator | changed: [testbed-manager] 2026-02-04 01:59:07.070861 | orchestrator | 2026-02-04 01:59:07.070870 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-04 01:59:07.070876 | orchestrator | Wednesday 04 February 2026 01:58:55 +0000 (0:00:18.927) 0:00:23.093 **** 2026-02-04 01:59:07.070882 | orchestrator | ok: [testbed-manager] 2026-02-04 01:59:07.070890 | orchestrator | 2026-02-04 01:59:07.070896 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-04 01:59:07.070903 | orchestrator | Wednesday 04 February 2026 01:58:56 +0000 (0:00:01.056) 0:00:24.149 **** 2026-02-04 
01:59:07.070909 | orchestrator | changed: [testbed-manager] 2026-02-04 01:59:07.070915 | orchestrator | 2026-02-04 01:59:07.070920 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-04 01:59:07.070927 | orchestrator | Wednesday 04 February 2026 01:58:57 +0000 (0:00:00.900) 0:00:25.049 **** 2026-02-04 01:59:07.070933 | orchestrator | ok: [testbed-manager] 2026-02-04 01:59:07.070939 | orchestrator | 2026-02-04 01:59:07.070946 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-04 01:59:07.070953 | orchestrator | Wednesday 04 February 2026 01:58:58 +0000 (0:00:01.194) 0:00:26.243 **** 2026-02-04 01:59:07.070960 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:59:07.071009 | orchestrator | 2026-02-04 01:59:07.071026 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-04 01:59:07.071033 | orchestrator | Wednesday 04 February 2026 01:58:59 +0000 (0:00:00.160) 0:00:26.404 **** 2026-02-04 01:59:07.071061 | orchestrator | skipping: [testbed-manager] 2026-02-04 01:59:07.071068 | orchestrator | 2026-02-04 01:59:07.071074 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-04 01:59:07.071081 | orchestrator | Wednesday 04 February 2026 01:58:59 +0000 (0:00:00.156) 0:00:26.560 **** 2026-02-04 01:59:07.071087 | orchestrator | changed: [testbed-manager] 2026-02-04 01:59:07.071093 | orchestrator | 2026-02-04 01:59:07.071099 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-04 01:59:07.071105 | orchestrator | Wednesday 04 February 2026 01:59:00 +0000 (0:00:01.036) 0:00:27.597 **** 2026-02-04 01:59:07.071111 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-04 01:59:07.071117 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-04 01:59:07.071126 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-04 01:59:07.071132 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-04 01:59:07.071139 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-04 01:59:07.071146 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-04 01:59:07.071152 | orchestrator | 2026-02-04 01:59:07.071159 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-04 01:59:07.071166 | orchestrator | Wednesday 04 February 2026 01:59:03 +0000 (0:00:03.391) 0:00:30.988 **** 2026-02-04 01:59:07.071171 | orchestrator | ok: [testbed-manager] 2026-02-04 01:59:07.071177 | orchestrator | 2026-02-04 01:59:07.071183 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-04 01:59:07.071188 | orchestrator | Wednesday 04 February 2026 01:59:05 +0000 (0:00:01.524) 0:00:32.513 **** 2026-02-04 01:59:07.071194 | orchestrator | changed: [testbed-manager] 2026-02-04 01:59:07.071200 | orchestrator | 2026-02-04 01:59:07.071206 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 01:59:07.071212 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 01:59:07.071219 | orchestrator | 2026-02-04 01:59:07.071224 | orchestrator | 2026-02-04 01:59:07.071236 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 01:59:07.071243 | orchestrator | Wednesday 04 February 2026 01:59:06 +0000 (0:00:01.577) 0:00:34.090 **** 2026-02-04 01:59:07.071249 | 
orchestrator | =============================================================================== 2026-02-04 01:59:07.071255 | orchestrator | osism.services.frr : Install frr package ------------------------------- 18.93s 2026-02-04 01:59:07.071260 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.39s 2026-02-04 01:59:07.071267 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.69s 2026-02-04 01:59:07.071272 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.58s 2026-02-04 01:59:07.071278 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.52s 2026-02-04 01:59:07.071303 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.19s 2026-02-04 01:59:07.071310 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.06s 2026-02-04 01:59:07.071316 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.04s 2026-02-04 01:59:07.071322 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.90s 2026-02-04 01:59:07.071327 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.66s 2026-02-04 01:59:07.071334 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-02-04 01:59:07.071339 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.16s 2026-02-04 01:59:07.335071 | orchestrator | + osism apply kubernetes 2026-02-04 01:59:09.310393 | orchestrator | 2026-02-04 01:59:09 | INFO  | Task f3d528cc-1d83-48db-98f5-7de29d915ae4 (kubernetes) was prepared for execution. 
2026-02-04 01:59:09.310458 | orchestrator | 2026-02-04 01:59:09 | INFO  | It takes a moment until task f3d528cc-1d83-48db-98f5-7de29d915ae4 (kubernetes) has been started and output is visible here. 2026-02-04 01:59:35.882671 | orchestrator | 2026-02-04 01:59:35.882770 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-04 01:59:35.882784 | orchestrator | 2026-02-04 01:59:35.882792 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-04 01:59:35.882800 | orchestrator | Wednesday 04 February 2026 01:59:14 +0000 (0:00:00.265) 0:00:00.265 **** 2026-02-04 01:59:35.882807 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:59:35.882815 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:59:35.882820 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:59:35.882824 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:59:35.882828 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:59:35.882832 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:59:35.882836 | orchestrator | 2026-02-04 01:59:35.882840 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-04 01:59:35.882844 | orchestrator | Wednesday 04 February 2026 01:59:15 +0000 (0:00:00.926) 0:00:01.191 **** 2026-02-04 01:59:35.882848 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:59:35.882853 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:59:35.882857 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:59:35.882861 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:59:35.882865 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:59:35.882868 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:59:35.882872 | orchestrator | 2026-02-04 01:59:35.882876 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-04 01:59:35.882881 | orchestrator | Wednesday 04 February 2026 
01:59:16 +0000 (0:00:00.715) 0:00:01.906 **** 2026-02-04 01:59:35.882885 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:59:35.882889 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:59:35.882893 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:59:35.882896 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:59:35.882900 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:59:35.882904 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:59:35.882908 | orchestrator | 2026-02-04 01:59:35.882912 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-04 01:59:35.882915 | orchestrator | Wednesday 04 February 2026 01:59:17 +0000 (0:00:00.862) 0:00:02.768 **** 2026-02-04 01:59:35.882919 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:59:35.882932 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:59:35.882936 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:59:35.882943 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:59:35.882947 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:59:35.882950 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:59:35.882954 | orchestrator | 2026-02-04 01:59:35.882958 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-04 01:59:35.882963 | orchestrator | Wednesday 04 February 2026 01:59:19 +0000 (0:00:01.735) 0:00:04.504 **** 2026-02-04 01:59:35.882966 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:59:35.882970 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:59:35.883025 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:59:35.883029 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:59:35.883033 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:59:35.883037 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:59:35.883041 | orchestrator | 2026-02-04 01:59:35.883045 | orchestrator | TASK [k3s_prereq : Enable 
IPv6 router advertisements] ************************** 2026-02-04 01:59:35.883049 | orchestrator | Wednesday 04 February 2026 01:59:20 +0000 (0:00:01.765) 0:00:06.269 **** 2026-02-04 01:59:35.883053 | orchestrator | changed: [testbed-node-3] 2026-02-04 01:59:35.883074 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:59:35.883078 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:59:35.883082 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:59:35.883086 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:59:35.883089 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:59:35.883093 | orchestrator | 2026-02-04 01:59:35.883102 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-04 01:59:35.883107 | orchestrator | Wednesday 04 February 2026 01:59:21 +0000 (0:00:01.039) 0:00:07.308 **** 2026-02-04 01:59:35.883110 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:59:35.883114 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:59:35.883118 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:59:35.883122 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:59:35.883125 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:59:35.883129 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:59:35.883133 | orchestrator | 2026-02-04 01:59:35.883137 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-04 01:59:35.883162 | orchestrator | Wednesday 04 February 2026 01:59:22 +0000 (0:00:00.641) 0:00:07.950 **** 2026-02-04 01:59:35.883166 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:59:35.883170 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:59:35.883174 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:59:35.883178 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:59:35.883181 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:59:35.883185 | orchestrator | skipping: 
[testbed-node-2] 2026-02-04 01:59:35.883189 | orchestrator | 2026-02-04 01:59:35.883193 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-04 01:59:35.883197 | orchestrator | Wednesday 04 February 2026 01:59:23 +0000 (0:00:00.551) 0:00:08.501 **** 2026-02-04 01:59:35.883201 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 01:59:35.883205 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 01:59:35.883209 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:59:35.883212 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 01:59:35.883216 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 01:59:35.883220 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:59:35.883224 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 01:59:35.883228 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 01:59:35.883231 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:59:35.883235 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 01:59:35.883253 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 01:59:35.883258 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:59:35.883261 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 01:59:35.883265 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 01:59:35.883269 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:59:35.883273 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 01:59:35.883277 | orchestrator | 
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 01:59:35.883280 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:59:35.883284 | orchestrator | 2026-02-04 01:59:35.883288 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-04 01:59:35.883292 | orchestrator | Wednesday 04 February 2026 01:59:23 +0000 (0:00:00.694) 0:00:09.196 **** 2026-02-04 01:59:35.883296 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:59:35.883299 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:59:35.883303 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:59:35.883311 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:59:35.883315 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:59:35.883319 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:59:35.883323 | orchestrator | 2026-02-04 01:59:35.883326 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-04 01:59:35.883331 | orchestrator | Wednesday 04 February 2026 01:59:25 +0000 (0:00:01.322) 0:00:10.519 **** 2026-02-04 01:59:35.883335 | orchestrator | ok: [testbed-node-3] 2026-02-04 01:59:35.883339 | orchestrator | ok: [testbed-node-4] 2026-02-04 01:59:35.883343 | orchestrator | ok: [testbed-node-5] 2026-02-04 01:59:35.883346 | orchestrator | ok: [testbed-node-0] 2026-02-04 01:59:35.883350 | orchestrator | ok: [testbed-node-1] 2026-02-04 01:59:35.883354 | orchestrator | ok: [testbed-node-2] 2026-02-04 01:59:35.883358 | orchestrator | 2026-02-04 01:59:35.883362 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-04 01:59:35.883365 | orchestrator | Wednesday 04 February 2026 01:59:25 +0000 (0:00:00.849) 0:00:11.368 **** 2026-02-04 01:59:35.883369 | orchestrator | changed: [testbed-node-1] 2026-02-04 01:59:35.883373 | orchestrator | changed: [testbed-node-3] 
2026-02-04 01:59:35.883377 | orchestrator | changed: [testbed-node-5] 2026-02-04 01:59:35.883381 | orchestrator | changed: [testbed-node-4] 2026-02-04 01:59:35.883384 | orchestrator | changed: [testbed-node-2] 2026-02-04 01:59:35.883388 | orchestrator | changed: [testbed-node-0] 2026-02-04 01:59:35.883392 | orchestrator | 2026-02-04 01:59:35.883396 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-04 01:59:35.883400 | orchestrator | Wednesday 04 February 2026 01:59:31 +0000 (0:00:05.292) 0:00:16.661 **** 2026-02-04 01:59:35.883403 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:59:35.883410 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:59:35.883414 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:59:35.883418 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:59:35.883422 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:59:35.883426 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:59:35.883430 | orchestrator | 2026-02-04 01:59:35.883433 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-04 01:59:35.883437 | orchestrator | Wednesday 04 February 2026 01:59:32 +0000 (0:00:01.111) 0:00:17.772 **** 2026-02-04 01:59:35.883441 | orchestrator | skipping: [testbed-node-3] 2026-02-04 01:59:35.883445 | orchestrator | skipping: [testbed-node-4] 2026-02-04 01:59:35.883449 | orchestrator | skipping: [testbed-node-5] 2026-02-04 01:59:35.883452 | orchestrator | skipping: [testbed-node-0] 2026-02-04 01:59:35.883456 | orchestrator | skipping: [testbed-node-1] 2026-02-04 01:59:35.883460 | orchestrator | skipping: [testbed-node-2] 2026-02-04 01:59:35.883464 | orchestrator | 2026-02-04 01:59:35.883468 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-04 01:59:35.883473 | orchestrator | Wednesday 04 February 2026 
01:59:33 +0000 (0:00:01.607) 0:00:19.380 ****
2026-02-04 01:59:35.883477 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:59:35.883481 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:59:35.883484 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:59:35.883488 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:59:35.883492 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:59:35.883496 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:59:35.883499 | orchestrator |
2026-02-04 01:59:35.883503 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-04 01:59:35.883507 | orchestrator | Wednesday 04 February 2026 01:59:34 +0000 (0:00:00.808) 0:00:20.189 ****
2026-02-04 01:59:35.883511 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-04 01:59:35.883518 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-04 01:59:35.883522 | orchestrator | skipping: [testbed-node-3]
2026-02-04 01:59:35.883526 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-04 01:59:35.883533 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-04 01:59:35.883537 | orchestrator | skipping: [testbed-node-4]
2026-02-04 01:59:35.883540 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-04 01:59:35.883544 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-04 01:59:35.883548 | orchestrator | skipping: [testbed-node-5]
2026-02-04 01:59:35.883552 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-04 01:59:35.883556 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-04 01:59:35.883559 | orchestrator | skipping: [testbed-node-0]
2026-02-04 01:59:35.883563 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-04 01:59:35.883567 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-04 01:59:35.883571 | orchestrator | skipping: [testbed-node-1]
2026-02-04 01:59:35.883574 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-04 01:59:35.883578 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-04 01:59:35.883582 | orchestrator | skipping: [testbed-node-2]
2026-02-04 01:59:35.883586 | orchestrator |
2026-02-04 01:59:35.883590 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-04 01:59:35.883596 | orchestrator | Wednesday 04 February 2026 01:59:35 +0000 (0:00:01.175) 0:00:21.365 ****
2026-02-04 02:00:51.803631 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:00:51.803746 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:00:51.803761 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:00:51.803772 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:00:51.803781 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:00:51.803792 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:00:51.803802 | orchestrator |
2026-02-04 02:00:51.803813 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-04 02:00:51.803825 | orchestrator | Wednesday 04 February 2026 01:59:36 +0000 (0:00:00.683) 0:00:22.048 ****
2026-02-04 02:00:51.803835 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:00:51.803845 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:00:51.803854 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:00:51.803863 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:00:51.803872 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:00:51.803882 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:00:51.803891 | orchestrator |
2026-02-04 02:00:51.803900 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-04 02:00:51.803909 | orchestrator |
2026-02-04 02:00:51.803919 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-04 02:00:51.803929 | orchestrator | Wednesday 04 February 2026 01:59:37 +0000 (0:00:01.361) 0:00:23.409 ****
2026-02-04 02:00:51.803938 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:00:51.803948 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:00:51.803958 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:00:51.803967 | orchestrator |
2026-02-04 02:00:51.803977 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-04 02:00:51.803988 | orchestrator | Wednesday 04 February 2026 01:59:39 +0000 (0:00:01.709) 0:00:25.118 ****
2026-02-04 02:00:51.803998 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:00:51.804035 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:00:51.804044 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:00:51.804115 | orchestrator |
2026-02-04 02:00:51.804124 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-04 02:00:51.804134 | orchestrator | Wednesday 04 February 2026 01:59:41 +0000 (0:00:02.003) 0:00:27.122 ****
2026-02-04 02:00:51.804143 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:00:51.804152 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:00:51.804213 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:00:51.804224 | orchestrator |
2026-02-04 02:00:51.804234 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-04 02:00:51.804273 | orchestrator | Wednesday 04 February 2026 01:59:42 +0000 (0:00:00.883) 0:00:28.005 ****
2026-02-04 02:00:51.804337 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:00:51.804348 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:00:51.804358 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:00:51.804368 | orchestrator |
2026-02-04 02:00:51.804376 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-04 02:00:51.804386 | orchestrator | Wednesday 04 February 2026 01:59:43 +0000 (0:00:00.354) 0:00:28.725 ****
2026-02-04 02:00:51.804396 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:00:51.804406 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:00:51.804414 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:00:51.804424 | orchestrator |
2026-02-04 02:00:51.804433 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-04 02:00:51.804491 | orchestrator | Wednesday 04 February 2026 01:59:43 +0000 (0:00:00.354) 0:00:29.079 ****
2026-02-04 02:00:51.804503 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:00:51.804513 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:00:51.804523 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:00:51.804533 | orchestrator |
2026-02-04 02:00:51.804543 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-04 02:00:51.804553 | orchestrator | Wednesday 04 February 2026 01:59:44 +0000 (0:00:01.245) 0:00:30.325 ****
2026-02-04 02:00:51.804562 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:00:51.804572 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:00:51.804582 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:00:51.804591 | orchestrator |
2026-02-04 02:00:51.804601 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-04 02:00:51.804611 | orchestrator | Wednesday 04 February 2026 01:59:46 +0000 (0:00:01.454) 0:00:31.780 ****
2026-02-04 02:00:51.804621 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:00:51.804633 | orchestrator |
2026-02-04 02:00:51.804643 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-04 02:00:51.804652 | orchestrator | Wednesday 04 February 2026 01:59:46 +0000 (0:00:00.538) 0:00:32.319 ****
2026-02-04 02:00:51.804663 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:00:51.804672 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:00:51.804681 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:00:51.804691 | orchestrator |
2026-02-04 02:00:51.804701 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-04 02:00:51.804711 | orchestrator | Wednesday 04 February 2026 01:59:48 +0000 (0:00:02.027) 0:00:34.347 ****
2026-02-04 02:00:51.804721 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:00:51.804731 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:00:51.804741 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:00:51.804752 | orchestrator |
2026-02-04 02:00:51.804761 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-04 02:00:51.804770 | orchestrator | Wednesday 04 February 2026 01:59:49 +0000 (0:00:00.510) 0:00:34.857 ****
2026-02-04 02:00:51.804780 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:00:51.804789 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:00:51.804799 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:00:51.804809 | orchestrator |
2026-02-04 02:00:51.804820 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-04 02:00:51.804830 | orchestrator | Wednesday 04 February 2026 01:59:50 +0000 (0:00:00.754) 0:00:35.611 ****
2026-02-04 02:00:51.804839 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:00:51.804848 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:00:51.804857 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:00:51.804867 | orchestrator |
2026-02-04 02:00:51.804877 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-04 02:00:51.804911 | orchestrator | Wednesday 04 February 2026 01:59:51 +0000 (0:00:01.292) 0:00:36.904 ****
2026-02-04 02:00:51.804923 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:00:51.804943 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:00:51.804954 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:00:51.804964 | orchestrator |
2026-02-04 02:00:51.804973 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-04 02:00:51.804984 | orchestrator | Wednesday 04 February 2026 01:59:51 +0000 (0:00:00.326) 0:00:37.231 ****
2026-02-04 02:00:51.804995 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:00:51.805067 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:00:51.805078 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:00:51.805088 | orchestrator |
2026-02-04 02:00:51.805098 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-04 02:00:51.805108 | orchestrator | Wednesday 04 February 2026 01:59:52 +0000 (0:00:00.611) 0:00:37.843 ****
2026-02-04 02:00:51.805117 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:00:51.805127 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:00:51.805136 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:00:51.805146 | orchestrator |
2026-02-04 02:00:51.805165 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-04 02:00:51.805176 | orchestrator | Wednesday 04 February 2026 01:59:53 +0000 (0:00:01.226) 0:00:39.070 ****
2026-02-04 02:00:51.805186 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:00:51.805197 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:00:51.805206 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:00:51.805215 | orchestrator |
2026-02-04 02:00:51.805224 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-04 02:00:51.805233 | orchestrator | Wednesday 04 February 2026 01:59:56 +0000 (0:00:02.828) 0:00:41.899 ****
2026-02-04 02:00:51.805243 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:00:51.805254 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:00:51.805264 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:00:51.805278 | orchestrator |
2026-02-04 02:00:51.805289 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-04 02:00:51.805299 | orchestrator | Wednesday 04 February 2026 01:59:56 +0000 (0:00:00.332) 0:00:42.231 ****
2026-02-04 02:00:51.805309 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-04 02:00:51.805320 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-04 02:00:51.805330 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-04 02:00:51.805340 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-04 02:00:51.805389 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-04 02:00:51.805402 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-04 02:00:51.805412 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-04 02:00:51.805422 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-04 02:00:51.805432 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-04 02:00:51.805442 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-04 02:00:51.805450 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-04 02:00:51.805470 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-04 02:00:51.805480 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-04 02:00:51.805489 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-04 02:00:51.805499 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-04 02:00:51.805509 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:00:51.805519 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:00:51.805528 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:00:51.805537 | orchestrator |
2026-02-04 02:00:51.805553 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-04 02:00:51.805563 | orchestrator | Wednesday 04 February 2026 02:00:50 +0000 (0:00:53.775) 0:01:36.006 ****
2026-02-04 02:00:51.805572 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:00:51.805582 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:00:51.805592 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:00:51.805601 | orchestrator |
2026-02-04 02:00:51.805611 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-04 02:00:51.805621 | orchestrator | Wednesday 04 February 2026 02:00:50 +0000 (0:00:00.348) 0:01:36.355 ****
2026-02-04 02:00:51.805641 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:01:44.757157 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:01:44.757255 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:01:44.757266 | orchestrator |
2026-02-04 02:01:44.757277 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-04 02:01:44.757286 | orchestrator | Wednesday 04 February 2026 02:00:51 +0000 (0:00:00.939) 0:01:37.295 ****
2026-02-04 02:01:44.757294 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:01:44.757302 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:01:44.757308 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:01:44.757315 | orchestrator |
2026-02-04 02:01:44.757323 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-04 02:01:44.757330 | orchestrator | Wednesday 04 February 2026 02:00:52 +0000 (0:00:01.094) 0:01:38.389 ****
2026-02-04 02:01:44.757337 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:01:44.757345 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:01:44.757352 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:01:44.757359 | orchestrator |
2026-02-04 02:01:44.757367 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-04 02:01:44.757374 | orchestrator | Wednesday 04 February 2026 02:01:30 +0000 (0:00:37.112) 0:02:15.502 ****
2026-02-04 02:01:44.757381 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:01:44.757389 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:01:44.757396 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:01:44.757403 | orchestrator |
2026-02-04 02:01:44.757411 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-04 02:01:44.757418 | orchestrator | Wednesday 04 February 2026 02:01:30 +0000 (0:00:00.688) 0:02:16.190 ****
2026-02-04 02:01:44.757425 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:01:44.757433 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:01:44.757440 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:01:44.757447 | orchestrator |
2026-02-04 02:01:44.757454 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-04 02:01:44.757462 | orchestrator | Wednesday 04 February 2026 02:01:31 +0000 (0:00:00.691) 0:02:16.882 ****
2026-02-04 02:01:44.757469 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:01:44.757476 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:01:44.757483 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:01:44.757491 | orchestrator |
2026-02-04 02:01:44.757498 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-04 02:01:44.757538 | orchestrator | Wednesday 04 February 2026 02:01:31 +0000 (0:00:00.575) 0:02:17.458 ****
2026-02-04 02:01:44.757546 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:01:44.757553 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:01:44.757560 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:01:44.757567 | orchestrator |
2026-02-04 02:01:44.757574 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-04 02:01:44.757580 | orchestrator | Wednesday 04 February 2026 02:01:32 +0000 (0:00:00.743) 0:02:18.202 ****
2026-02-04 02:01:44.757587 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:01:44.757594 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:01:44.757600 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:01:44.757611 | orchestrator |
2026-02-04 02:01:44.757620 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-04 02:01:44.757627 | orchestrator | Wednesday 04 February 2026 02:01:33 +0000 (0:00:00.328) 0:02:18.530 ****
2026-02-04 02:01:44.757634 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:01:44.757640 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:01:44.757647 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:01:44.757654 | orchestrator |
2026-02-04 02:01:44.757660 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-04 02:01:44.757667 | orchestrator | Wednesday 04 February 2026 02:01:33 +0000 (0:00:00.610) 0:02:19.140 ****
2026-02-04 02:01:44.757674 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:01:44.757681 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:01:44.757688 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:01:44.757695 | orchestrator |
2026-02-04 02:01:44.757703 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-04 02:01:44.757711 | orchestrator | Wednesday 04 February 2026 02:01:34 +0000 (0:00:00.886) 0:02:19.764 ****
2026-02-04 02:01:44.757719 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:01:44.757726 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:01:44.757733 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:01:44.757742 | orchestrator |
2026-02-04 02:01:44.757748 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-04 02:01:44.757753 | orchestrator | Wednesday 04 February 2026 02:01:35 +0000 (0:00:00.886) 0:02:20.650 ****
2026-02-04 02:01:44.757761 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:01:44.757767 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:01:44.757772 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:01:44.757777 | orchestrator |
2026-02-04 02:01:44.757782 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-04 02:01:44.757787 | orchestrator | Wednesday 04 February 2026 02:01:36 +0000 (0:00:01.121) 0:02:21.771 ****
2026-02-04 02:01:44.757792 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:01:44.757797 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:01:44.757802 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:01:44.757807 | orchestrator |
2026-02-04 02:01:44.757812 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-04 02:01:44.757817 | orchestrator | Wednesday 04 February 2026 02:01:36 +0000 (0:00:00.304) 0:02:22.076 ****
2026-02-04 02:01:44.757823 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:01:44.757829 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:01:44.757833 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:01:44.757839 | orchestrator |
2026-02-04 02:01:44.757844 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-04 02:01:44.757849 | orchestrator | Wednesday 04 February 2026 02:01:36 +0000 (0:00:00.315) 0:02:22.391 ****
2026-02-04 02:01:44.757854 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:01:44.757859 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:01:44.757865 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:01:44.757870 | orchestrator |
2026-02-04 02:01:44.757875 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-04 02:01:44.757883 | orchestrator | Wednesday 04 February 2026 02:01:37 +0000 (0:00:00.626) 0:02:23.018 ****
2026-02-04 02:01:44.757901 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:01:44.757908 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:01:44.757933 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:01:44.757941 | orchestrator |
2026-02-04 02:01:44.757949 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-04 02:01:44.757958 | orchestrator | Wednesday 04 February 2026 02:01:38 +0000 (0:00:00.927) 0:02:23.945 ****
2026-02-04 02:01:44.757966 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-04 02:01:44.757974 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-04 02:01:44.757981 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-04 02:01:44.757989 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-04 02:01:44.757997 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-04 02:01:44.758005 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-04 02:01:44.758070 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-04 02:01:44.758079 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-04 02:01:44.758084 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-04 02:01:44.758088 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-04 02:01:44.758093 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-04 02:01:44.758097 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-04 02:01:44.758102 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-04 02:01:44.758106 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-04 02:01:44.758110 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-04 02:01:44.758115 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-04 02:01:44.758121 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-04 02:01:44.758129 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-04 02:01:44.758136 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-04 02:01:44.758143 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-04 02:01:44.758150 | orchestrator |
2026-02-04 02:01:44.758156 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-04 02:01:44.758163 | orchestrator |
2026-02-04 02:01:44.758169 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-04 02:01:44.758176 | orchestrator | Wednesday 04 February 2026 02:01:41 +0000 (0:00:03.065) 0:02:27.011 ****
2026-02-04 02:01:44.758183 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:01:44.758189 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:01:44.758196 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:01:44.758203 | orchestrator |
2026-02-04 02:01:44.758224 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-04 02:01:44.758231 | orchestrator | Wednesday 04 February 2026 02:01:41 +0000 (0:00:00.353) 0:02:27.364 ****
2026-02-04 02:01:44.758239 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:01:44.758246 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:01:44.758253 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:01:44.758267 | orchestrator |
2026-02-04 02:01:44.758275 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-04 02:01:44.758282 | orchestrator | Wednesday 04 February 2026 02:01:42 +0000 (0:00:00.851) 0:02:28.216 ****
2026-02-04 02:01:44.758289 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:01:44.758296 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:01:44.758303 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:01:44.758310 | orchestrator |
2026-02-04 02:01:44.758317 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-04 02:01:44.758325 | orchestrator | Wednesday 04 February 2026 02:01:43 +0000 (0:00:00.547) 0:02:28.595 ****
2026-02-04 02:01:44.758330 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 02:01:44.758336 | orchestrator |
2026-02-04 02:01:44.758343 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-04 02:01:44.758350 | orchestrator | Wednesday 04 February 2026 02:01:43 +0000 (0:00:00.547) 0:02:29.142 ****
2026-02-04 02:01:44.758357 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:01:44.758364 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:01:44.758372 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:01:44.758379 | orchestrator |
2026-02-04 02:01:44.758386 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-04 02:01:44.758393 | orchestrator | Wednesday 04 February 2026 02:01:44 +0000 (0:00:00.572) 0:02:29.715 ****
2026-02-04 02:01:44.758400 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:01:44.758407 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:01:44.758415 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:01:44.758422 | orchestrator |
2026-02-04 02:01:44.758429 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-04 02:01:44.758437 | orchestrator | Wednesday 04 February 2026 02:01:44 +0000 (0:00:00.344) 0:02:30.059 ****
2026-02-04 02:01:44.758448 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:03:29.807113 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:03:29.807237 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:03:29.807253 | orchestrator |
2026-02-04 02:03:29.807265 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-04 02:03:29.807277 | orchestrator | Wednesday 04 February 2026 02:01:44 +0000 (0:00:00.354) 0:02:30.414 ****
2026-02-04 02:03:29.807288 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:03:29.807298 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:03:29.807308 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:03:29.807318 | orchestrator |
2026-02-04 02:03:29.807328 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-04 02:03:29.807338 | orchestrator | Wednesday 04 February 2026 02:01:45 +0000 (0:00:00.696) 0:02:31.111 ****
2026-02-04 02:03:29.807348 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:03:29.807358 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:03:29.807368 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:03:29.807378 | orchestrator |
2026-02-04 02:03:29.807389 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-04 02:03:29.807399 | orchestrator | Wednesday 04 February 2026 02:01:46 +0000 (0:00:01.288) 0:02:32.400 ****
2026-02-04 02:03:29.807423 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:03:29.807433 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:03:29.807443 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:03:29.807462 | orchestrator |
2026-02-04 02:03:29.807472 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-04 02:03:29.807482 | orchestrator | Wednesday 04 February 2026 02:01:48 +0000 (0:00:01.217) 0:02:33.617 ****
2026-02-04 02:03:29.807492 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:03:29.807502 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:03:29.807514 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:03:29.807531 | orchestrator |
2026-02-04 02:03:29.807553 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-04 02:03:29.807607 | orchestrator |
2026-02-04 02:03:29.807625 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-04 02:03:29.807641 | orchestrator | Wednesday 04 February 2026 02:01:58 +0000 (0:00:09.968) 0:02:43.586 ****
2026-02-04 02:03:29.807659 | orchestrator | ok: [testbed-manager]
2026-02-04 02:03:29.807677 | orchestrator |
2026-02-04 02:03:29.807694 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-04 02:03:29.807712 | orchestrator | Wednesday 04 February 2026 02:01:58 +0000 (0:00:00.790) 0:02:44.376 ****
2026-02-04 02:03:29.807730 | orchestrator | changed: [testbed-manager]
2026-02-04 02:03:29.807747 | orchestrator |
2026-02-04 02:03:29.807761 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-04 02:03:29.807773 | orchestrator | Wednesday 04 February 2026 02:01:59 +0000 (0:00:00.695) 0:02:45.072 ****
2026-02-04 02:03:29.807786 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-04 02:03:29.807797 | orchestrator |
2026-02-04 02:03:29.807808 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-04 02:03:29.807820 | orchestrator | Wednesday 04 February 2026 02:02:00 +0000 (0:00:00.568) 0:02:45.640 ****
2026-02-04 02:03:29.807831 | orchestrator | changed: [testbed-manager]
2026-02-04 02:03:29.807842 | orchestrator |
2026-02-04 02:03:29.807853 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-04 02:03:29.807865 | orchestrator | Wednesday 04 February 2026 02:02:01 +0000 (0:00:00.895) 0:02:46.536 ****
2026-02-04 02:03:29.807876 | orchestrator | changed: [testbed-manager]
2026-02-04 02:03:29.807887 | orchestrator |
2026-02-04 02:03:29.807900 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-04 02:03:29.807912 | orchestrator | Wednesday 04 February 2026 02:02:01 +0000 (0:00:00.603) 0:02:47.140 ****
2026-02-04 02:03:29.807925 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-04 02:03:29.807937 | orchestrator |
2026-02-04 02:03:29.807949 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-04 02:03:29.807961 | orchestrator | Wednesday 04 February 2026 02:02:03 +0000 (0:00:01.777) 0:02:48.917 ****
2026-02-04 02:03:29.807973 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-04 02:03:29.807984 | orchestrator |
2026-02-04 02:03:29.808020 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-04 02:03:29.808035 | orchestrator | Wednesday 04 February 2026 02:02:04 +0000 (0:00:00.936) 0:02:49.853 ****
2026-02-04 02:03:29.808078 | orchestrator | changed: [testbed-manager]
2026-02-04 02:03:29.808094 | orchestrator |
2026-02-04 02:03:29.808110 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-04 02:03:29.808125 | orchestrator | Wednesday 04 February 2026 02:02:04 +0000 (0:00:00.464) 0:02:50.318 ****
2026-02-04 02:03:29.808140 | orchestrator | changed: [testbed-manager]
2026-02-04 02:03:29.808155 | orchestrator |
2026-02-04 02:03:29.808170 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-04 02:03:29.808186 | orchestrator |
2026-02-04 02:03:29.808201 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-04 02:03:29.808219 | orchestrator | Wednesday 04 February 2026 02:02:05 +0000 (0:00:00.513) 0:02:50.831 ****
2026-02-04 02:03:29.808235 | orchestrator | ok: [testbed-manager]
2026-02-04 02:03:29.808253 | orchestrator |
2026-02-04 02:03:29.808269 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-04 02:03:29.808286 | orchestrator | Wednesday 04 February 2026 02:02:05 +0000 (0:00:00.160) 0:02:50.991 ****
2026-02-04 02:03:29.808303 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-04 02:03:29.808320 | orchestrator |
2026-02-04 02:03:29.808336 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-04 02:03:29.808351 | orchestrator | Wednesday 04 February 2026 02:02:05 +0000 (0:00:00.473) 0:02:51.464 ****
2026-02-04 02:03:29.808369 | orchestrator | ok: [testbed-manager]
2026-02-04 02:03:29.808384 | orchestrator |
2026-02-04 02:03:29.808419 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-04 02:03:29.808436 | orchestrator | Wednesday 04 February 2026 02:02:06 +0000 (0:00:00.917) 0:02:52.382 ****
2026-02-04 02:03:29.808451 | orchestrator | ok: [testbed-manager]
2026-02-04 02:03:29.808466 | orchestrator |
2026-02-04 02:03:29.808498 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-04 02:03:29.808508 | orchestrator | Wednesday 04 February 2026 02:02:08 +0000 (0:00:01.987) 0:02:54.369 ****
2026-02-04 02:03:29.808518 | orchestrator | changed: [testbed-manager]
2026-02-04 02:03:29.808527 | orchestrator |
2026-02-04 02:03:29.808537 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-04 02:03:29.808547 | orchestrator | Wednesday 04 February 2026 02:02:09 +0000 (0:00:00.836) 0:02:55.205 ****
2026-02-04 02:03:29.808557 | orchestrator | ok: [testbed-manager]
2026-02-04 02:03:29.808566 | orchestrator |
2026-02-04 02:03:29.808578 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-04 02:03:29.808593 | orchestrator | Wednesday 04 February 2026 02:02:10 +0000 (0:00:00.477) 0:02:55.683 ****
2026-02-04 02:03:29.808608 | orchestrator | changed: [testbed-manager]
2026-02-04 02:03:29.808624 | orchestrator |
2026-02-04 02:03:29.808641 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-04 02:03:29.808656 | orchestrator | Wednesday 04 February 2026 02:02:22 +0000 (0:00:12.387) 0:03:08.071 ****
2026-02-04 02:03:29.808673 | orchestrator | changed: [testbed-manager]
2026-02-04 02:03:29.808684 | orchestrator |
2026-02-04 02:03:29.808693 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-04 02:03:29.808703 | orchestrator | Wednesday 04 February 2026 02:02:36 +0000 (0:00:13.558) 0:03:21.629 ****
2026-02-04 02:03:29.808713 | orchestrator | ok: [testbed-manager]
2026-02-04 02:03:29.808722 | orchestrator |
2026-02-04 02:03:29.808732 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-04 02:03:29.808741 | orchestrator |
2026-02-04 02:03:29.808751 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-04 02:03:29.808761 | orchestrator | Wednesday 04 February 2026 02:02:36 +0000 (0:00:00.827) 0:03:22.457 ****
2026-02-04 02:03:29.808770 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:03:29.808780 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:03:29.808790 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:03:29.808799 | orchestrator |
2026-02-04 02:03:29.808809 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-04 02:03:29.808818 | orchestrator | Wednesday 04 February 2026 02:02:37 +0000 (0:00:00.357) 0:03:22.814 ****
2026-02-04 02:03:29.808828 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:03:29.808837 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:03:29.808847 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:03:29.808856 | orchestrator |
2026-02-04 02:03:29.808866 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-04 02:03:29.808876 | orchestrator | Wednesday 04 February 2026 02:02:37 +0000 (0:00:00.346) 0:03:23.161 ****
2026-02-04 02:03:29.808886 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:03:29.808896 | orchestrator |
2026-02-04 02:03:29.808906 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-04 02:03:29.808915 | orchestrator | Wednesday 04 February 2026 02:02:38 +0000 (0:00:00.551) 0:03:23.712 ****
2026-02-04 02:03:29.808925 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-04 02:03:29.808935 |
orchestrator | 2026-02-04 02:03:29.808944 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-04 02:03:29.808954 | orchestrator | Wednesday 04 February 2026 02:02:39 +0000 (0:00:01.187) 0:03:24.900 **** 2026-02-04 02:03:29.808963 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 02:03:29.808973 | orchestrator | 2026-02-04 02:03:29.808982 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-04 02:03:29.809002 | orchestrator | Wednesday 04 February 2026 02:02:40 +0000 (0:00:00.922) 0:03:25.823 **** 2026-02-04 02:03:29.809012 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:03:29.809021 | orchestrator | 2026-02-04 02:03:29.809031 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-04 02:03:29.809041 | orchestrator | Wednesday 04 February 2026 02:02:40 +0000 (0:00:00.137) 0:03:25.960 **** 2026-02-04 02:03:29.809077 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 02:03:29.809087 | orchestrator | 2026-02-04 02:03:29.809097 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-04 02:03:29.809107 | orchestrator | Wednesday 04 February 2026 02:02:41 +0000 (0:00:01.126) 0:03:27.087 **** 2026-02-04 02:03:29.809116 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:03:29.809126 | orchestrator | 2026-02-04 02:03:29.809135 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-04 02:03:29.809145 | orchestrator | Wednesday 04 February 2026 02:02:41 +0000 (0:00:00.140) 0:03:27.228 **** 2026-02-04 02:03:29.809154 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:03:29.809164 | orchestrator | 2026-02-04 02:03:29.809174 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-04 02:03:29.809183 | orchestrator | Wednesday 04 
February 2026 02:02:41 +0000 (0:00:00.171) 0:03:27.400 **** 2026-02-04 02:03:29.809193 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:03:29.809202 | orchestrator | 2026-02-04 02:03:29.809215 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-04 02:03:29.809243 | orchestrator | Wednesday 04 February 2026 02:02:42 +0000 (0:00:00.164) 0:03:27.564 **** 2026-02-04 02:03:29.809261 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:03:29.809278 | orchestrator | 2026-02-04 02:03:29.809293 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-04 02:03:29.809309 | orchestrator | Wednesday 04 February 2026 02:02:42 +0000 (0:00:00.114) 0:03:27.679 **** 2026-02-04 02:03:29.809319 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 02:03:29.809329 | orchestrator | 2026-02-04 02:03:29.809339 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-04 02:03:29.809348 | orchestrator | Wednesday 04 February 2026 02:02:47 +0000 (0:00:04.981) 0:03:32.661 **** 2026-02-04 02:03:29.809361 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-04 02:03:29.809376 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
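The `FAILED - RETRYING: … (30 retries left)` message above is Ansible's `until`/`retries`/`delay` loop polling for the Cilium resources to roll out. A minimal shell sketch of the same pattern, with a stand-in probe (the real task would run something like a `kubectl rollout status` check, which is an assumption here, not taken from the log):

```shell
#!/bin/sh
# Sketch of an until/retries poll loop. `check_ready` is a stand-in probe
# that succeeds on the third call, simulating a slow rollout; the real task
# would instead poll the cluster, e.g.:
#   kubectl -n kube-system rollout status deployment/cilium-operator
attempts=0
check_ready() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # becomes true on the third probe
}

retries=30
until check_ready; do
  if [ "$attempts" -ge "$retries" ]; then
    echo "gave up after $retries attempts" >&2
    exit 1
  fi
  sleep 0   # the real task sleeps between retries (Ansible `delay`)
done
echo "ready after $attempts probes"
```

Each retry that fails is what the log reports as `FAILED - RETRYING`, and the final success is the `ok:` line; the 42.63s total in the recap is the sum of those polls.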
2026-02-04 02:03:29.809432 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-04 02:03:55.752263 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-04 02:03:55.752340 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-04 02:03:55.752347 | orchestrator | 2026-02-04 02:03:55.752354 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-04 02:03:55.752360 | orchestrator | Wednesday 04 February 2026 02:03:29 +0000 (0:00:42.630) 0:04:15.292 **** 2026-02-04 02:03:55.752365 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 02:03:55.752371 | orchestrator | 2026-02-04 02:03:55.752376 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-04 02:03:55.752381 | orchestrator | Wednesday 04 February 2026 02:03:31 +0000 (0:00:01.422) 0:04:16.715 **** 2026-02-04 02:03:55.752387 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 02:03:55.752392 | orchestrator | 2026-02-04 02:03:55.752397 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-04 02:03:55.752402 | orchestrator | Wednesday 04 February 2026 02:03:32 +0000 (0:00:01.734) 0:04:18.449 **** 2026-02-04 02:03:55.752406 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 02:03:55.752411 | orchestrator | 2026-02-04 02:03:55.752416 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-04 02:03:55.752422 | orchestrator | Wednesday 04 February 2026 02:03:34 +0000 (0:00:01.480) 0:04:19.930 **** 2026-02-04 02:03:55.752445 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:03:55.752450 | orchestrator | 2026-02-04 02:03:55.752455 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-04 02:03:55.752460 | orchestrator 
| Wednesday 04 February 2026 02:03:34 +0000 (0:00:00.149) 0:04:20.080 **** 2026-02-04 02:03:55.752465 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-04 02:03:55.752470 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-04 02:03:55.752475 | orchestrator | 2026-02-04 02:03:55.752480 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-04 02:03:55.752485 | orchestrator | Wednesday 04 February 2026 02:03:36 +0000 (0:00:01.948) 0:04:22.028 **** 2026-02-04 02:03:55.752490 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:03:55.752495 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:03:55.752500 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:03:55.752504 | orchestrator | 2026-02-04 02:03:55.752509 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-04 02:03:55.752514 | orchestrator | Wednesday 04 February 2026 02:03:36 +0000 (0:00:00.373) 0:04:22.401 **** 2026-02-04 02:03:55.752519 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:03:55.752524 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:03:55.752529 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:03:55.752534 | orchestrator | 2026-02-04 02:03:55.752538 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-04 02:03:55.752543 | orchestrator | 2026-02-04 02:03:55.752548 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-04 02:03:55.752553 | orchestrator | Wednesday 04 February 2026 02:03:37 +0000 (0:00:00.967) 0:04:23.369 **** 2026-02-04 02:03:55.752558 | orchestrator | ok: [testbed-manager] 2026-02-04 02:03:55.752563 | orchestrator | 2026-02-04 02:03:55.752568 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-02-04 02:03:55.752573 | orchestrator | Wednesday 04 February 2026 02:03:38 +0000 (0:00:00.387) 0:04:23.757 **** 2026-02-04 02:03:55.752578 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 02:03:55.752583 | orchestrator | 2026-02-04 02:03:55.752587 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-04 02:03:55.752592 | orchestrator | Wednesday 04 February 2026 02:03:38 +0000 (0:00:00.238) 0:04:23.996 **** 2026-02-04 02:03:55.752597 | orchestrator | changed: [testbed-manager] 2026-02-04 02:03:55.752602 | orchestrator | 2026-02-04 02:03:55.752606 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-04 02:03:55.752611 | orchestrator | 2026-02-04 02:03:55.752616 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-04 02:03:55.752621 | orchestrator | Wednesday 04 February 2026 02:03:44 +0000 (0:00:06.154) 0:04:30.151 **** 2026-02-04 02:03:55.752638 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:03:55.752643 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:03:55.752655 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:03:55.752660 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:03:55.752665 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:03:55.752670 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:03:55.752675 | orchestrator | 2026-02-04 02:03:55.752679 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-04 02:03:55.752684 | orchestrator | Wednesday 04 February 2026 02:03:45 +0000 (0:00:00.617) 0:04:30.768 **** 2026-02-04 02:03:55.752689 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-04 02:03:55.752694 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-02-04 02:03:55.752699 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-04 02:03:55.752703 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-04 02:03:55.752714 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-04 02:03:55.752719 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-04 02:03:55.752724 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-04 02:03:55.752729 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-04 02:03:55.752734 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-04 02:03:55.752748 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-04 02:03:55.752753 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-04 02:03:55.752759 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-04 02:03:55.752763 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-04 02:03:55.752768 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-04 02:03:55.752773 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-04 02:03:55.752790 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-04 02:03:55.752795 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-04 02:03:55.752800 | orchestrator | ok: [testbed-node-2 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-02-04 02:03:55.752805 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-04 02:03:55.752810 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-04 02:03:55.752815 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-04 02:03:55.752819 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-04 02:03:55.752824 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-04 02:03:55.752829 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-04 02:03:55.752834 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-04 02:03:55.752839 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-04 02:03:55.752843 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-04 02:03:55.752848 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-04 02:03:55.752853 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-04 02:03:55.752858 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-04 02:03:55.752863 | orchestrator | 2026-02-04 02:03:55.752867 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-04 02:03:55.752872 | orchestrator | Wednesday 04 February 2026 02:03:54 +0000 (0:00:09.099) 0:04:39.868 **** 2026-02-04 02:03:55.752877 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:03:55.752882 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:03:55.752887 | orchestrator | 
skipping: [testbed-node-5] 2026-02-04 02:03:55.752891 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:03:55.752896 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:03:55.752901 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:03:55.752906 | orchestrator | 2026-02-04 02:03:55.752911 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-04 02:03:55.752915 | orchestrator | Wednesday 04 February 2026 02:03:54 +0000 (0:00:00.620) 0:04:40.489 **** 2026-02-04 02:03:55.752920 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:03:55.752929 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:03:55.752934 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:03:55.752938 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:03:55.752943 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:03:55.752948 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:03:55.752953 | orchestrator | 2026-02-04 02:03:55.752958 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:03:55.752963 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 02:03:55.752971 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-04 02:03:55.752976 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-04 02:03:55.752981 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-04 02:03:55.752986 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-04 02:03:55.752991 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-04 02:03:55.752995 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-04 02:03:55.753000 | orchestrator | 2026-02-04 02:03:55.753005 | orchestrator | 2026-02-04 02:03:55.753010 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:03:55.753015 | orchestrator | Wednesday 04 February 2026 02:03:55 +0000 (0:00:00.741) 0:04:41.231 **** 2026-02-04 02:03:55.753023 | orchestrator | =============================================================================== 2026-02-04 02:03:56.192073 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.78s 2026-02-04 02:03:56.192154 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.63s 2026-02-04 02:03:56.192167 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 37.11s 2026-02-04 02:03:56.192178 | orchestrator | kubectl : Install required packages ------------------------------------ 13.56s 2026-02-04 02:03:56.192189 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 12.39s 2026-02-04 02:03:56.192199 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.97s 2026-02-04 02:03:56.192219 | orchestrator | Manage labels ----------------------------------------------------------- 9.10s 2026-02-04 02:03:56.192225 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.15s 2026-02-04 02:03:56.192239 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.29s 2026-02-04 02:03:56.192245 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.98s 2026-02-04 02:03:56.192252 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.07s 2026-02-04 02:03:56.192259 | orchestrator 
| k3s_server : Detect Kubernetes version for label compatibility ---------- 2.83s 2026-02-04 02:03:56.192265 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.03s 2026-02-04 02:03:56.192271 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 2.00s 2026-02-04 02:03:56.192277 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.99s 2026-02-04 02:03:56.192283 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.95s 2026-02-04 02:03:56.192289 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.78s 2026-02-04 02:03:56.192319 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.77s 2026-02-04 02:03:56.192326 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.74s 2026-02-04 02:03:56.192331 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.73s 2026-02-04 02:03:56.548975 | orchestrator | + osism apply copy-kubeconfig 2026-02-04 02:04:08.911839 | orchestrator | 2026-02-04 02:04:08 | INFO  | Task 67b01f18-52d6-44e9-b8a5-86df332d72d5 (copy-kubeconfig) was prepared for execution. 2026-02-04 02:04:08.911947 | orchestrator | 2026-02-04 02:04:08 | INFO  | It takes a moment until task 67b01f18-52d6-44e9-b8a5-86df332d72d5 (copy-kubeconfig) has been started and output is visible here. 
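The "Write kubeconfig file" and "Change server address in the kubeconfig" tasks seen above fetch k3s's kubeconfig (which points at the node-local loopback) and rewrite the server URL to an address reachable from the manager. A self-contained sketch, where the file path and the VIP `192.168.16.4` are illustrative assumptions rather than values from this deployment:

```shell
#!/bin/sh
# Sketch: rewrite the API server endpoint in a kubeconfig. k3s writes
# https://127.0.0.1:6443 by default; clients on other hosts need the
# cluster VIP instead. Path and VIP below are assumptions.
kubeconfig=/tmp/kubeconfig-demo
cat > "$kubeconfig" <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: default
  cluster:
    server: https://127.0.0.1:6443
EOF

# Point clients at the (assumed) cluster VIP instead of loopback
sed -i 's|https://127.0.0.1:6443|https://192.168.16.4:6443|' "$kubeconfig"
grep 'server:' "$kubeconfig"
```

The same substitution happens twice in the play above: once for the copy on the manager host and once for the copy used inside the manager service container.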
2026-02-04 02:04:16.590361 | orchestrator | 2026-02-04 02:04:16.590445 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-02-04 02:04:16.590460 | orchestrator | 2026-02-04 02:04:16.590468 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-04 02:04:16.590475 | orchestrator | Wednesday 04 February 2026 02:04:13 +0000 (0:00:00.178) 0:00:00.178 **** 2026-02-04 02:04:16.590483 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-04 02:04:16.590491 | orchestrator | 2026-02-04 02:04:16.590498 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-04 02:04:16.590505 | orchestrator | Wednesday 04 February 2026 02:04:14 +0000 (0:00:00.801) 0:00:00.980 **** 2026-02-04 02:04:16.590532 | orchestrator | changed: [testbed-manager] 2026-02-04 02:04:16.590541 | orchestrator | 2026-02-04 02:04:16.590549 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-02-04 02:04:16.590556 | orchestrator | Wednesday 04 February 2026 02:04:15 +0000 (0:00:01.326) 0:00:02.306 **** 2026-02-04 02:04:16.590568 | orchestrator | changed: [testbed-manager] 2026-02-04 02:04:16.590575 | orchestrator | 2026-02-04 02:04:16.590586 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:04:16.590595 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 02:04:16.590615 | orchestrator | 2026-02-04 02:04:16.590631 | orchestrator | 2026-02-04 02:04:16.590639 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:04:16.590646 | orchestrator | Wednesday 04 February 2026 02:04:16 +0000 (0:00:00.533) 0:00:02.840 **** 2026-02-04 02:04:16.590654 | orchestrator | 
=============================================================================== 2026-02-04 02:04:16.590669 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.33s 2026-02-04 02:04:16.590677 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s 2026-02-04 02:04:16.590685 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.53s 2026-02-04 02:04:16.949655 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh 2026-02-04 02:04:29.388967 | orchestrator | 2026-02-04 02:04:29 | INFO  | Task 70f7aafb-d6fd-4278-b648-6db3b37a9130 (openstackclient) was prepared for execution. 2026-02-04 02:04:29.389072 | orchestrator | 2026-02-04 02:04:29 | INFO  | It takes a moment until task 70f7aafb-d6fd-4278-b648-6db3b37a9130 (openstackclient) has been started and output is visible here. 2026-02-04 02:05:18.542738 | orchestrator | 2026-02-04 02:05:18.542843 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-04 02:05:18.542855 | orchestrator | 2026-02-04 02:05:18.542861 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-04 02:05:18.542868 | orchestrator | Wednesday 04 February 2026 02:04:34 +0000 (0:00:00.245) 0:00:00.245 **** 2026-02-04 02:05:18.542876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-04 02:05:18.542883 | orchestrator | 2026-02-04 02:05:18.542916 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-04 02:05:18.542924 | orchestrator | Wednesday 04 February 2026 02:04:34 +0000 (0:00:00.260) 0:00:00.505 **** 2026-02-04 02:05:18.542931 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-04 
02:05:18.542940 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-04 02:05:18.542947 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-04 02:05:18.542954 | orchestrator | 2026-02-04 02:05:18.542961 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-04 02:05:18.542967 | orchestrator | Wednesday 04 February 2026 02:04:35 +0000 (0:00:01.355) 0:00:01.861 **** 2026-02-04 02:05:18.542975 | orchestrator | changed: [testbed-manager] 2026-02-04 02:05:18.542981 | orchestrator | 2026-02-04 02:05:18.542988 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-04 02:05:18.542995 | orchestrator | Wednesday 04 February 2026 02:04:37 +0000 (0:00:01.618) 0:00:03.480 **** 2026-02-04 02:05:18.543002 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-02-04 02:05:18.543009 | orchestrator | ok: [testbed-manager] 2026-02-04 02:05:18.543016 | orchestrator | 2026-02-04 02:05:18.543022 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-04 02:05:18.543028 | orchestrator | Wednesday 04 February 2026 02:05:13 +0000 (0:00:35.620) 0:00:39.100 **** 2026-02-04 02:05:18.543035 | orchestrator | changed: [testbed-manager] 2026-02-04 02:05:18.543087 | orchestrator | 2026-02-04 02:05:18.543095 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-04 02:05:18.543101 | orchestrator | Wednesday 04 February 2026 02:05:14 +0000 (0:00:01.011) 0:00:40.112 **** 2026-02-04 02:05:18.543108 | orchestrator | ok: [testbed-manager] 2026-02-04 02:05:18.543114 | orchestrator | 2026-02-04 02:05:18.543120 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-04 02:05:18.543127 | orchestrator | Wednesday 04 February 2026 02:05:14 
+0000 (0:00:00.673) 0:00:40.786 **** 2026-02-04 02:05:18.543132 | orchestrator | changed: [testbed-manager] 2026-02-04 02:05:18.543139 | orchestrator | 2026-02-04 02:05:18.543145 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-04 02:05:18.543151 | orchestrator | Wednesday 04 February 2026 02:05:16 +0000 (0:00:01.461) 0:00:42.247 **** 2026-02-04 02:05:18.543156 | orchestrator | changed: [testbed-manager] 2026-02-04 02:05:18.543162 | orchestrator | 2026-02-04 02:05:18.543169 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-04 02:05:18.543175 | orchestrator | Wednesday 04 February 2026 02:05:17 +0000 (0:00:00.830) 0:00:43.078 **** 2026-02-04 02:05:18.543180 | orchestrator | changed: [testbed-manager] 2026-02-04 02:05:18.543186 | orchestrator | 2026-02-04 02:05:18.543191 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-04 02:05:18.543197 | orchestrator | Wednesday 04 February 2026 02:05:17 +0000 (0:00:00.619) 0:00:43.698 **** 2026-02-04 02:05:18.543202 | orchestrator | ok: [testbed-manager] 2026-02-04 02:05:18.543219 | orchestrator | 2026-02-04 02:05:18.543225 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:05:18.543231 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 02:05:18.543239 | orchestrator | 2026-02-04 02:05:18.543245 | orchestrator | 2026-02-04 02:05:18.543250 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:05:18.543256 | orchestrator | Wednesday 04 February 2026 02:05:18 +0000 (0:00:00.416) 0:00:44.114 **** 2026-02-04 02:05:18.543262 | orchestrator | =============================================================================== 2026-02-04 02:05:18.543267 | orchestrator | 
osism.services.openstackclient : Manage openstackclient service -------- 35.62s 2026-02-04 02:05:18.543273 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.62s 2026-02-04 02:05:18.543290 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.46s 2026-02-04 02:05:18.543297 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.36s 2026-02-04 02:05:18.543304 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.01s 2026-02-04 02:05:18.543310 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.83s 2026-02-04 02:05:18.543316 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.67s 2026-02-04 02:05:18.543323 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.62s 2026-02-04 02:05:18.543331 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.42s 2026-02-04 02:05:18.543338 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.26s 2026-02-04 02:05:21.177127 | orchestrator | 2026-02-04 02:05:21 | INFO  | Task 94172ae0-4cdf-41fb-ae55-9ea64c4602e9 (common) was prepared for execution. 2026-02-04 02:05:21.177214 | orchestrator | 2026-02-04 02:05:21 | INFO  | It takes a moment until task 94172ae0-4cdf-41fb-ae55-9ea64c4602e9 (common) has been started and output is visible here. 
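The openstackclient play above copies a `docker-compose.yml` and then waits for the service's health status, which is why a healthcheck matters. A hedged sketch of what such a compose file could look like; the image name, mount paths, and healthcheck command are assumptions, not the actual OSISM template:

```shell
#!/bin/sh
# Sketch: write a compose file for a client-container service with a
# healthcheck, so a "wait for healthy" handler can poll container health.
# All values below are illustrative assumptions.
mkdir -p /tmp/openstackclient-demo
cat > /tmp/openstackclient-demo/docker-compose.yml <<'EOF'
services:
  openstackclient:
    image: osism/openstackclient:latest   # assumed image name
    restart: unless-stopped
    volumes:
      - /opt/openstackclient/data:/data   # assumed data mount
    healthcheck:                          # polled by the wait handler
      test: ["CMD", "openstack", "--version"]
      interval: 30s
      retries: 3
EOF
grep -c 'healthcheck' /tmp/openstackclient-demo/docker-compose.yml
```

With a healthcheck defined, the "Wait for an healthy service" handler can poll `docker inspect`'s health state until it reports healthy instead of merely checking that the container started.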
2026-02-04 02:05:34.270805 | orchestrator | 2026-02-04 02:05:34.270898 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-04 02:05:34.270907 | orchestrator | 2026-02-04 02:05:34.270911 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-04 02:05:34.270916 | orchestrator | Wednesday 04 February 2026 02:05:25 +0000 (0:00:00.386) 0:00:00.386 **** 2026-02-04 02:05:34.270921 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:05:34.270927 | orchestrator | 2026-02-04 02:05:34.270931 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-04 02:05:34.270935 | orchestrator | Wednesday 04 February 2026 02:05:27 +0000 (0:00:01.482) 0:00:01.868 **** 2026-02-04 02:05:34.270939 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 02:05:34.270943 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 02:05:34.270947 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 02:05:34.270951 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 02:05:34.270955 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 02:05:34.270958 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 02:05:34.270962 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 02:05:34.270966 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 02:05:34.270970 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 
2026-02-04 02:05:34.270989 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 02:05:34.270993 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 02:05:34.270997 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 02:05:34.271002 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 02:05:34.271006 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 02:05:34.271010 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 02:05:34.271014 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 02:05:34.271017 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 02:05:34.271107 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 02:05:34.271114 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 02:05:34.271118 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 02:05:34.271122 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 02:05:34.271128 | orchestrator | 2026-02-04 02:05:34.271134 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-04 02:05:34.271141 | orchestrator | Wednesday 04 February 2026 02:05:29 +0000 (0:00:02.608) 0:00:04.476 **** 2026-02-04 02:05:34.271147 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:05:34.271154 | orchestrator | 2026-02-04 02:05:34.271161 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-04 02:05:34.271171 | orchestrator | Wednesday 04 February 2026 02:05:31 +0000 (0:00:01.490) 0:00:05.967 **** 2026-02-04 02:05:34.271180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:05:34.271189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:05:34.271208 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:05:34.271213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:05:34.271217 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:05:34.271221 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:05:34.271231 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:05:34.271235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:05:34.271239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:05:34.271252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:05:35.315605 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:05:35.315673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:05:35.315692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:05:35.315696 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:05:35.315702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:05:35.315714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 
02:05:35.315719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:05:35.315736 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:05:35.315741 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:05:35.315744 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:05:35.315753 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:05:35.315757 | orchestrator | 2026-02-04 02:05:35.315762 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-04 02:05:35.315767 | orchestrator | Wednesday 04 February 2026 02:05:34 +0000 (0:00:03.528) 0:00:09.496 **** 2026-02-04 02:05:35.315773 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 02:05:35.315781 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:35.315788 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:35.315795 | orchestrator | skipping: [testbed-manager] 2026-02-04 02:05:35.315802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 02:05:35.315817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:35.965527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:35.965630 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:05:35.965683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 02:05:35.965694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:35.965701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:35.965708 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:05:35.965714 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 02:05:35.965724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:35.965731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:35.965738 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:05:35.965761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 02:05:35.965776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:35.965783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:35.965790 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:05:35.965796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 02:05:35.965800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:35.965804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:35.965808 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:05:35.965812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 02:05:35.965819 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:36.935085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:36.935156 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:05:36.935163 | orchestrator | 2026-02-04 02:05:36.935169 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-04 02:05:36.935174 | orchestrator | Wednesday 04 February 2026 02:05:35 +0000 (0:00:00.996) 0:00:10.492 **** 2026-02-04 02:05:36.935180 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 02:05:36.935186 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:36.935191 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:05:36.935209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 02:05:36.935217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:36.935234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:36.935239 | orchestrator | skipping: [testbed-manager]
2026-02-04 02:05:36.935243 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:05:36.935263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:36.935268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:36.935272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:36.935276 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:05:36.935280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:36.935284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:36.935290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:36.935298 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:05:36.935302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:36.935316 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:42.412385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:42.412470 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:05:42.412487 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:42.412499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:42.412509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:42.412518 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:05:42.412528 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:42.412555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:42.412565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:42.412573 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:05:42.412582 | orchestrator |
2026-02-04 02:05:42.412591 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-04 02:05:42.412601 | orchestrator | Wednesday 04 February 2026 02:05:37 +0000 (0:00:01.991) 0:00:12.484 ****
2026-02-04 02:05:42.412608 | orchestrator | skipping: [testbed-manager]
2026-02-04 02:05:42.412616 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:05:42.412624 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:05:42.412633 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:05:42.412656 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:05:42.412665 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:05:42.412674 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:05:42.412682 | orchestrator |
2026-02-04 02:05:42.412691 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-04 02:05:42.412699 | orchestrator | Wednesday 04 February 2026 02:05:38 +0000 (0:00:00.742) 0:00:13.226 ****
2026-02-04 02:05:42.412707 | orchestrator | skipping: [testbed-manager]
2026-02-04 02:05:42.412715 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:05:42.412723 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:05:42.412731 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:05:42.412739 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:05:42.412746 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:05:42.412758 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:05:42.412768 | orchestrator |
2026-02-04 02:05:42.412777 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-04 02:05:42.412785 | orchestrator | Wednesday 04 February 2026 02:05:39 +0000 (0:00:00.945) 0:00:14.172 ****
2026-02-04 02:05:42.412796 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:42.412821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:42.412838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:42.412850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:42.412858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:42.412865 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:42.412886 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:45.242229 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242387 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242419 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242426 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242458 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242465 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242471 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:45.242476 | orchestrator |
2026-02-04 02:05:45.242484 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-04 02:05:45.242491 | orchestrator | Wednesday 04 February 2026 02:05:43 +0000 (0:00:03.514) 0:00:17.687 ****
2026-02-04 02:05:45.242497 | orchestrator | [WARNING]: Skipped
2026-02-04 02:05:45.242505 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-04 02:05:45.242513 | orchestrator | to this access issue:
2026-02-04 02:05:45.242520 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-04 02:05:45.242526 | orchestrator | directory
2026-02-04 02:05:45.242531 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 02:05:45.242539 | orchestrator |
2026-02-04 02:05:45.242545 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-04 02:05:45.242550 | orchestrator | Wednesday 04 February 2026 02:05:44 +0000 (0:00:01.018) 0:00:18.705 ****
2026-02-04 02:05:45.242556 | orchestrator | [WARNING]: Skipped
2026-02-04 02:05:45.242568 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-04 02:05:55.808464 | orchestrator | to this access issue:
2026-02-04 02:05:55.808537 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-04 02:05:55.808544 | orchestrator | directory
2026-02-04 02:05:55.808549 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 02:05:55.808555 | orchestrator |
2026-02-04 02:05:55.808560 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-04 02:05:55.808565 | orchestrator | Wednesday 04 February 2026 02:05:45 +0000 (0:00:01.386) 0:00:20.091 ****
2026-02-04 02:05:55.808582 | orchestrator | [WARNING]: Skipped
2026-02-04 02:05:55.808586 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-04 02:05:55.808590 | orchestrator | to this access issue:
2026-02-04 02:05:55.808594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-04 02:05:55.808598 | orchestrator | directory
2026-02-04 02:05:55.808602 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 02:05:55.808606 | orchestrator |
2026-02-04 02:05:55.808612 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-04 02:05:55.808618 | orchestrator | Wednesday 04 February 2026 02:05:46 +0000 (0:00:00.940) 0:00:21.032 ****
2026-02-04 02:05:55.808624 | orchestrator | [WARNING]: Skipped
2026-02-04 02:05:55.808631 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-04 02:05:55.808637 | orchestrator | to this access issue:
2026-02-04 02:05:55.808642 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-04 02:05:55.808648 | orchestrator | directory
2026-02-04 02:05:55.808654 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 02:05:55.808660 | orchestrator |
2026-02-04 02:05:55.808666 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-04 02:05:55.808673 | orchestrator | Wednesday 04 February 2026 02:05:47 +0000 (0:00:00.926) 0:00:21.958 ****
2026-02-04 02:05:55.808679 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:05:55.808685 | orchestrator | changed: [testbed-manager]
2026-02-04 02:05:55.808691 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:05:55.808697 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:05:55.808704 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:05:55.808710 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:05:55.808724 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:05:55.808731 | orchestrator |
2026-02-04 02:05:55.808738 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-04 02:05:55.808744 | orchestrator | Wednesday 04 February 2026 02:05:50 +0000 (0:00:02.594) 0:00:24.553 ****
2026-02-04 02:05:55.808751 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 02:05:55.808759 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 02:05:55.808765 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 02:05:55.808772 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 02:05:55.808778 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 02:05:55.808785 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 02:05:55.808795 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 02:05:55.808801 | orchestrator |
2026-02-04 02:05:55.808809 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-04 02:05:55.808816 | orchestrator | Wednesday 04 February 2026 02:05:52 +0000 (0:00:02.211) 0:00:26.764 ****
2026-02-04 02:05:55.808823 | orchestrator | changed: [testbed-manager]
2026-02-04 02:05:55.808830 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:05:55.808837 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:05:55.808844 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:05:55.808851 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:05:55.808858 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:05:55.808865 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:05:55.808871 | orchestrator |
2026-02-04 02:05:55.808877 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-02-04 02:05:55.808890 | orchestrator | Wednesday 04 February 2026 02:05:54 +0000 (0:00:02.009) 0:00:28.774 ****
2026-02-04 02:05:55.808897 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:55.808913 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:55.808918 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:55.808922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:55.808926 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:55.808933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:55.808937 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:05:55.808946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:55.808958 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:05:55.808972 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:06:02.025144 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:06:02.025220 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:06:02.025228 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:06:02.025248 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:06:02.025253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:06:02.025273 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:06:02.025278 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 02:06:02.025301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:06:02.025306 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:06:02.025312 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:06:02.025318 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:06:02.025323 | orchestrator | 2026-02-04 02:06:02.025329 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] 
************************ 2026-02-04 02:06:02.025336 | orchestrator | Wednesday 04 February 2026 02:05:56 +0000 (0:00:01.852) 0:00:30.627 **** 2026-02-04 02:06:02.025341 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 02:06:02.025347 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 02:06:02.025357 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 02:06:02.025362 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 02:06:02.025367 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 02:06:02.025372 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 02:06:02.025376 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-04 02:06:02.025381 | orchestrator | 2026-02-04 02:06:02.025386 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-04 02:06:02.025391 | orchestrator | Wednesday 04 February 2026 02:05:58 +0000 (0:00:02.074) 0:00:32.701 **** 2026-02-04 02:06:02.025396 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-04 02:06:02.025401 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-04 02:06:02.025406 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-04 02:06:02.025415 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-04 02:06:02.025420 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 
2026-02-04 02:06:02.025425 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-04 02:06:02.025430 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-04 02:06:02.025435 | orchestrator | 2026-02-04 02:06:02.025439 | orchestrator | TASK [common : Check common containers] **************************************** 2026-02-04 02:06:02.025444 | orchestrator | Wednesday 04 February 2026 02:05:59 +0000 (0:00:01.751) 0:00:34.453 **** 2026-02-04 02:06:02.025449 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:06:02.025459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:06:02.583172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:06:02.583247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:06:02.583273 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:06:02.583289 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:06:02.583294 | orchestrator 
| changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 02:06:02.583299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:06:02.583305 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:06:02.583323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:06:02.583328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:06:02.583340 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:06:02.583346 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:06:02.583350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:06:02.583356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:06:02.583363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:06:02.583373 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:07:21.225880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:07:21.226071 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:07:21.226088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-04 02:07:21.226108 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:07:21.226115 | orchestrator |
2026-02-04 02:07:21.226124 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-04 02:07:21.226132 | orchestrator | Wednesday 04 February 2026 02:06:02 +0000 (0:00:02.660) 0:00:37.113 ****
2026-02-04 02:07:21.226138 | orchestrator | changed: [testbed-manager]
2026-02-04 02:07:21.226146 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:07:21.226152 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:07:21.226158 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:07:21.226165 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:07:21.226170 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:07:21.226176 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:07:21.226182 | orchestrator |
2026-02-04 02:07:21.226188 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-04 02:07:21.226195 | orchestrator | Wednesday 04 February 2026 02:06:04 +0000 (0:00:01.432) 0:00:38.546 ****
2026-02-04 02:07:21.226201 | orchestrator | changed: [testbed-manager]
2026-02-04 02:07:21.226207 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:07:21.226213 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:07:21.226220 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:07:21.226226 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:07:21.226232 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:07:21.226238 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:07:21.226244 | orchestrator |
2026-02-04 02:07:21.226251 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 02:07:21.226270 | orchestrator | Wednesday 04 February 2026 02:06:05 +0000 (0:00:00.072) 0:00:39.701 ****
2026-02-04 02:07:21.226283 | orchestrator |
2026-02-04 02:07:21.226289 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 02:07:21.226295 | orchestrator | Wednesday 04 February 2026 02:06:05 +0000 (0:00:00.085) 0:00:39.774 ****
2026-02-04 02:07:21.226301 | orchestrator |
2026-02-04 02:07:21.226306 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 02:07:21.226312 | orchestrator | Wednesday 04 February 2026 02:06:05 +0000 (0:00:00.075) 0:00:39.860 ****
2026-02-04 02:07:21.226318 | orchestrator |
2026-02-04 02:07:21.226324 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 02:07:21.226330 | orchestrator | Wednesday 04 February 2026 02:06:05 +0000 (0:00:00.265) 0:00:40.200 ****
2026-02-04 02:07:21.226335 | orchestrator |
2026-02-04 02:07:21.226340 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 02:07:21.226355 | orchestrator | Wednesday 04 February 2026 02:06:05 +0000 (0:00:00.089) 0:00:40.290 ****
2026-02-04 02:07:21.226360 | orchestrator |
2026-02-04 02:07:21.226367 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-04 02:07:21.226372 | orchestrator | Wednesday 04 February 2026 02:06:05 +0000 (0:00:00.088) 0:00:40.379 ****
2026-02-04 02:07:21.226395 | orchestrator |
2026-02-04 02:07:21.226401 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-04 02:07:21.226407 | orchestrator | Wednesday 04 February 2026 02:06:05 +0000 (0:00:00.091) 0:00:40.471 ****
2026-02-04 02:07:21.226412 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:07:21.226418 | orchestrator | changed: [testbed-manager]
2026-02-04 02:07:21.226423 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:07:21.226429 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:07:21.226435 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:07:21.226459 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:07:21.226466 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:07:21.226473 | orchestrator |
2026-02-04 02:07:21.226479 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-04 02:07:21.226486 | orchestrator | Wednesday 04 February 2026 02:06:37 +0000 (0:00:31.541) 0:01:12.013 ****
2026-02-04 02:07:21.226492 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:07:21.226498 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:07:21.226504 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:07:21.226511 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:07:21.226517 | orchestrator | changed: [testbed-manager]
2026-02-04 02:07:21.226523 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:07:21.226529 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:07:21.226535 | orchestrator |
2026-02-04 02:07:21.226542 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-04 02:07:21.226549 | orchestrator | Wednesday 04 February 2026 02:07:10 +0000 (0:00:32.926) 0:01:44.939 ****
2026-02-04 02:07:21.226556 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:07:21.226564 | orchestrator | ok: [testbed-manager]
2026-02-04 02:07:21.226571 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:07:21.226578 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:07:21.226584 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:07:21.226591 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:07:21.226597 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:07:21.226603 | orchestrator |
2026-02-04 02:07:21.226610 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-04 02:07:21.226616 | orchestrator | Wednesday 04 February 2026 02:07:12 +0000 (0:00:01.978) 0:01:46.918 ****
2026-02-04 02:07:21.226622 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:07:21.226628 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:07:21.226634 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:07:21.226640 | orchestrator | changed: [testbed-manager]
2026-02-04 02:07:21.226646 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:07:21.226652 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:07:21.226657 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:07:21.226663 | orchestrator |
2026-02-04 02:07:21.226669 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 02:07:21.226678 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 02:07:21.226685 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 02:07:21.226702 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 02:07:21.226716 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 02:07:21.226722 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 02:07:21.226728 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 02:07:21.226734 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 02:07:21.226740 | orchestrator |
2026-02-04 02:07:21.226745 | orchestrator |
2026-02-04 02:07:21.226751 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 02:07:21.226757 | orchestrator | Wednesday 04 February 2026 02:07:21 +0000 (0:00:08.803) 0:01:55.722 ****
2026-02-04 02:07:21.226762 | orchestrator | ===============================================================================
2026-02-04 02:07:21.226768 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.93s
2026-02-04 02:07:21.226774 | orchestrator | common : Restart fluentd container ------------------------------------- 31.54s
2026-02-04 02:07:21.226779 | orchestrator | common : Restart cron container ----------------------------------------- 8.80s
2026-02-04 02:07:21.226786 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.53s
2026-02-04 02:07:21.226791 | orchestrator | common : Copying over config.json files for services -------------------- 3.51s
2026-02-04 02:07:21.226796 | orchestrator | common : Check common containers ---------------------------------------- 2.66s
2026-02-04 02:07:21.226802 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.61s
2026-02-04 02:07:21.226807 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.59s
2026-02-04 02:07:21.226813 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.21s
2026-02-04 02:07:21.226820 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.07s
2026-02-04 02:07:21.226826 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.01s
2026-02-04 02:07:21.226832 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.99s
2026-02-04 02:07:21.226838 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.98s
2026-02-04 02:07:21.226845 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.85s
2026-02-04 02:07:21.226851 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.75s
2026-02-04 02:07:21.226857 | orchestrator | common : include_tasks -------------------------------------------------- 1.49s
2026-02-04 02:07:21.226873 | orchestrator | common : include_tasks -------------------------------------------------- 1.48s
2026-02-04 02:07:21.654363 | orchestrator | common : Creating log volume -------------------------------------------- 1.43s
2026-02-04 02:07:21.654434 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.39s
2026-02-04 02:07:21.654441 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.16s
2026-02-04 02:07:24.393768 | orchestrator | 2026-02-04 02:07:24 | INFO  | Task 6311b257-82b6-442e-85e3-2c5fc1fc4207 (loadbalancer) was prepared for execution.
2026-02-04 02:07:24.393839 | orchestrator | 2026-02-04 02:07:24 | INFO  | It takes a moment until task 6311b257-82b6-442e-85e3-2c5fc1fc4207 (loadbalancer) has been started and output is visible here.
2026-02-04 02:07:38.766697 | orchestrator |
2026-02-04 02:07:38.766800 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 02:07:38.766815 | orchestrator |
2026-02-04 02:07:38.766826 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 02:07:38.766837 | orchestrator | Wednesday 04 February 2026 02:07:29 +0000 (0:00:00.282) 0:00:00.282 ****
2026-02-04 02:07:38.766871 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:07:38.766883 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:07:38.766893 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:07:38.766903 | orchestrator |
2026-02-04 02:07:38.766912 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 02:07:38.766922 | orchestrator | Wednesday 04 February 2026 02:07:29 +0000 (0:00:00.306) 0:00:00.589 ****
2026-02-04 02:07:38.766933 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-04 02:07:38.766943 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-04 02:07:38.766952 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-04 02:07:38.766962 | orchestrator |
2026-02-04 02:07:38.766972 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-04 02:07:38.766981 | orchestrator |
2026-02-04 02:07:38.767063 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-04 02:07:38.767099 | orchestrator | Wednesday 04 February 2026 02:07:29 +0000 (0:00:00.491) 0:00:01.081 ****
2026-02-04 02:07:38.767117 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:07:38.767133 | orchestrator |
2026-02-04 02:07:38.767151 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-04 02:07:38.767168 | orchestrator | Wednesday 04 February 2026 02:07:30 +0000 (0:00:00.588) 0:00:01.669 ****
2026-02-04 02:07:38.767184 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:07:38.767199 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:07:38.767209 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:07:38.767220 | orchestrator |
2026-02-04 02:07:38.767232 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-04 02:07:38.767244 | orchestrator | Wednesday 04 February 2026 02:07:31 +0000 (0:00:00.613) 0:00:02.283 ****
2026-02-04 02:07:38.767255 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:07:38.767266 | orchestrator |
2026-02-04 02:07:38.767278 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-04 02:07:38.767289 | orchestrator | Wednesday 04 February 2026 02:07:31 +0000 (0:00:00.735) 0:00:03.018 ****
2026-02-04 02:07:38.767301 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:07:38.767312 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:07:38.767324 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:07:38.767335 | orchestrator |
2026-02-04 02:07:38.767347 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-04 02:07:38.767356 | orchestrator | Wednesday 04 February 2026 02:07:32 +0000 (0:00:00.584) 0:00:03.603 ****
2026-02-04 02:07:38.767366 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-04 02:07:38.767376 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-04 02:07:38.767386 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-04 02:07:38.767395 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-04 02:07:38.767405 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-04 02:07:38.767414 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-04 02:07:38.767424 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-04 02:07:38.767434 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-04 02:07:38.767449 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-04 02:07:38.767464 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-04 02:07:38.767491 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-04 02:07:38.767507 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-04 02:07:38.767524 | orchestrator |
2026-02-04 02:07:38.767541 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-04 02:07:38.767557 | orchestrator | Wednesday 04 February 2026 02:07:34 +0000 (0:00:02.122) 0:00:05.726 ****
2026-02-04 02:07:38.767570 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-04 02:07:38.767580 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-04 02:07:38.767590 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-04 02:07:38.767600 | orchestrator |
2026-02-04 02:07:38.767609 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-04 02:07:38.767622 | orchestrator | Wednesday 04 February 2026 02:07:35 +0000 (0:00:00.684) 0:00:06.411 ****
2026-02-04 02:07:38.767639 | orchestrator | changed: [testbed-node-2] =>
(item=ip_vs) 2026-02-04 02:07:38.767654 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-04 02:07:38.767670 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-04 02:07:38.767686 | orchestrator | 2026-02-04 02:07:38.767703 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-04 02:07:38.767719 | orchestrator | Wednesday 04 February 2026 02:07:36 +0000 (0:00:01.203) 0:00:07.615 **** 2026-02-04 02:07:38.767736 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-04 02:07:38.767747 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:07:38.767777 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-04 02:07:38.767787 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:07:38.767797 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-04 02:07:38.767808 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:07:38.767825 | orchestrator | 2026-02-04 02:07:38.767840 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-04 02:07:38.767856 | orchestrator | Wednesday 04 February 2026 02:07:36 +0000 (0:00:00.567) 0:00:08.182 **** 2026-02-04 02:07:38.767883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 02:07:38.767908 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 02:07:38.767927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 02:07:38.767946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 
02:07:38.767957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 02:07:38.767976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 02:07:44.062939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 02:07:44.063120 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 02:07:44.063141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 02:07:44.063152 | orchestrator | 2026-02-04 02:07:44.063177 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-04 02:07:44.063930 | orchestrator | Wednesday 04 February 2026 02:07:38 +0000 (0:00:01.763) 0:00:09.946 **** 2026-02-04 02:07:44.063969 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:07:44.064031 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:07:44.064039 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:07:44.064046 | orchestrator | 2026-02-04 02:07:44.064053 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-04 02:07:44.064059 | orchestrator | Wednesday 04 February 2026 02:07:39 +0000 (0:00:00.901) 0:00:10.847 **** 2026-02-04 02:07:44.064066 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-04 02:07:44.064073 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-04 
02:07:44.064080 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-04 02:07:44.064086 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-04 02:07:44.064093 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-04 02:07:44.064099 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-04 02:07:44.064105 | orchestrator | 2026-02-04 02:07:44.064112 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-04 02:07:44.064122 | orchestrator | Wednesday 04 February 2026 02:07:41 +0000 (0:00:01.491) 0:00:12.339 **** 2026-02-04 02:07:44.064133 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:07:44.064143 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:07:44.064154 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:07:44.064165 | orchestrator | 2026-02-04 02:07:44.064176 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-04 02:07:44.064188 | orchestrator | Wednesday 04 February 2026 02:07:42 +0000 (0:00:00.892) 0:00:13.231 **** 2026-02-04 02:07:44.064200 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:07:44.064212 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:07:44.064224 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:07:44.064234 | orchestrator | 2026-02-04 02:07:44.064246 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-04 02:07:44.064254 | orchestrator | Wednesday 04 February 2026 02:07:43 +0000 (0:00:01.398) 0:00:14.630 **** 2026-02-04 02:07:44.064263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 02:07:44.064291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:07:44.064299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:07:44.064307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__d8095cf15ff0ab20c798578b36489f86bbd596d5', '__omit_place_holder__d8095cf15ff0ab20c798578b36489f86bbd596d5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 02:07:44.064322 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:07:44.064329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 02:07:44.064368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:07:44.064376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:07:44.064383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d8095cf15ff0ab20c798578b36489f86bbd596d5', '__omit_place_holder__d8095cf15ff0ab20c798578b36489f86bbd596d5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 02:07:44.064389 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:07:44.064400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 02:07:46.836318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:07:46.836411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:07:46.836417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d8095cf15ff0ab20c798578b36489f86bbd596d5', '__omit_place_holder__d8095cf15ff0ab20c798578b36489f86bbd596d5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 02:07:46.836422 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:07:46.836428 | orchestrator | 2026-02-04 02:07:46.836433 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-02-04 02:07:46.836438 | orchestrator | Wednesday 04 February 2026 02:07:44 +0000 (0:00:00.608) 0:00:15.238 **** 2026-02-04 02:07:46.836442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 02:07:46.836447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 02:07:46.836451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 02:07:46.836480 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 02:07:46.836485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:07:46.836492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d8095cf15ff0ab20c798578b36489f86bbd596d5', 
'__omit_place_holder__d8095cf15ff0ab20c798578b36489f86bbd596d5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 02:07:46.836499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 02:07:46.836505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:07:46.836512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d8095cf15ff0ab20c798578b36489f86bbd596d5', 
'__omit_place_holder__d8095cf15ff0ab20c798578b36489f86bbd596d5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 02:07:46.836533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 02:07:55.366425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:07:55.366529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__d8095cf15ff0ab20c798578b36489f86bbd596d5', 
'__omit_place_holder__d8095cf15ff0ab20c798578b36489f86bbd596d5'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 02:07:55.366541 | orchestrator | 2026-02-04 02:07:55.366547 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-04 02:07:55.366553 | orchestrator | Wednesday 04 February 2026 02:07:46 +0000 (0:00:02.777) 0:00:18.016 **** 2026-02-04 02:07:55.366557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 02:07:55.366564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 02:07:55.366568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 02:07:55.366586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 02:07:55.366613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 02:07:55.366618 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 02:07:55.366622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 02:07:55.366626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 02:07:55.366630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 02:07:55.366634 | orchestrator | 2026-02-04 02:07:55.366638 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-04 02:07:55.366642 | orchestrator | Wednesday 04 February 2026 02:07:49 +0000 (0:00:03.049) 0:00:21.066 **** 2026-02-04 02:07:55.366650 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-04 02:07:55.366655 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-04 02:07:55.366659 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-04 02:07:55.366663 | orchestrator | 2026-02-04 02:07:55.366667 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-04 02:07:55.366671 | orchestrator | Wednesday 04 February 2026 02:07:51 +0000 (0:00:01.845) 0:00:22.911 **** 2026-02-04 02:07:55.366674 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-04 02:07:55.366679 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-04 02:07:55.366682 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-04 02:07:55.366686 | orchestrator | 2026-02-04 02:07:55.366690 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-04 02:07:55.366694 | orchestrator | Wednesday 04 February 2026 02:07:54 +0000 
(0:00:02.946) 0:00:25.858 **** 2026-02-04 02:07:55.366698 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:07:55.366703 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:07:55.366707 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:07:55.366711 | orchestrator | 2026-02-04 02:07:55.366720 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-04 02:08:06.921826 | orchestrator | Wednesday 04 February 2026 02:07:55 +0000 (0:00:00.687) 0:00:26.545 **** 2026-02-04 02:08:06.922093 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-04 02:08:06.922145 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-04 02:08:06.922165 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-04 02:08:06.922184 | orchestrator | 2026-02-04 02:08:06.922204 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-04 02:08:06.922224 | orchestrator | Wednesday 04 February 2026 02:07:57 +0000 (0:00:02.224) 0:00:28.769 **** 2026-02-04 02:08:06.922280 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-04 02:08:06.922300 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-04 02:08:06.922315 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-04 02:08:06.922332 | orchestrator | 2026-02-04 02:08:06.922350 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-04 02:08:06.922371 | orchestrator | Wednesday 04 February 2026 
02:07:59 +0000 (0:00:02.171) 0:00:30.941 **** 2026-02-04 02:08:06.922392 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-04 02:08:06.922413 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-04 02:08:06.922433 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-04 02:08:06.922453 | orchestrator | 2026-02-04 02:08:06.922492 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-04 02:08:06.922511 | orchestrator | Wednesday 04 February 2026 02:08:01 +0000 (0:00:01.401) 0:00:32.342 **** 2026-02-04 02:08:06.922532 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-04 02:08:06.922551 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-04 02:08:06.922570 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-04 02:08:06.922588 | orchestrator | 2026-02-04 02:08:06.922630 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-04 02:08:06.922642 | orchestrator | Wednesday 04 February 2026 02:08:02 +0000 (0:00:01.385) 0:00:33.728 **** 2026-02-04 02:08:06.922653 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:08:06.922664 | orchestrator | 2026-02-04 02:08:06.922675 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-04 02:08:06.922686 | orchestrator | Wednesday 04 February 2026 02:08:03 +0000 (0:00:00.584) 0:00:34.312 **** 2026-02-04 02:08:06.922700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 02:08:06.922715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 02:08:06.922734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 02:08:06.922775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 02:08:06.922794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 02:08:06.922815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 02:08:06.922849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 02:08:06.922870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 02:08:06.922883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 02:08:06.922894 | orchestrator | 2026-02-04 02:08:06.922906 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-04 02:08:06.922955 | orchestrator | Wednesday 04 February 2026 02:08:06 +0000 (0:00:03.148) 0:00:37.461 **** 2026-02-04 02:08:06.922985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 02:08:07.770654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:07.770749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:07.770785 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:07.770795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 02:08:07.770800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:07.770805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:07.770809 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:07.770814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 02:08:07.770843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:07.770848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:07.770858 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:07.770862 | orchestrator | 2026-02-04 02:08:07.770867 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-04 
02:08:07.770872 | orchestrator | Wednesday 04 February 2026 02:08:06 +0000 (0:00:00.644) 0:00:38.105 **** 2026-02-04 02:08:07.770878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 02:08:07.770882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:07.770886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:07.770891 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:07.770895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 02:08:07.770961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:08.741536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:08.741626 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:08.741636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 02:08:08.741643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:08.741648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:08.741653 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:08.741658 | orchestrator | 2026-02-04 02:08:08.741664 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-04 02:08:08.741670 | orchestrator | Wednesday 04 February 2026 02:08:07 +0000 (0:00:00.845) 0:00:38.951 **** 2026-02-04 02:08:08.741675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 02:08:08.741681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:08.741697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:08.741706 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:08.741711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 02:08:08.741716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:08.741721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:08.741726 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:08.741731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 02:08:08.741749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:08.741757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:08.741769 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:10.172433 | orchestrator | 2026-02-04 02:08:10.172528 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-04 02:08:10.172540 | orchestrator | Wednesday 04 February 2026 02:08:08 +0000 (0:00:00.959) 0:00:39.910 **** 2026-02-04 02:08:10.172551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 02:08:10.172562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:10.172570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:10.172577 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:10.172585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 02:08:10.172592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:10.172614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:10.172639 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:10.172663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 02:08:10.172671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:10.172677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:10.172683 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:10.172689 | orchestrator | 2026-02-04 02:08:10.172695 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-04 02:08:10.172701 | orchestrator | Wednesday 04 February 2026 02:08:09 +0000 (0:00:00.629) 0:00:40.539 **** 2026-02-04 02:08:10.172706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 02:08:10.172712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:10.172740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:10.172747 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:10.172759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 02:08:11.359305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:11.359395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:11.359409 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:11.359421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 02:08:11.359431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:11.359440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:11.359471 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:11.359481 | orchestrator | 2026-02-04 02:08:11.359490 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-04 02:08:11.359501 | orchestrator | Wednesday 04 February 2026 02:08:10 +0000 (0:00:00.816) 0:00:41.355 **** 2026-02-04 02:08:11.359523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-04 02:08:11.359551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:11.359562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:11.359571 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:11.359580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-04 02:08:11.359590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:11.359605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:11.359615 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:11.359628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-04 02:08:11.359643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:12.863437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:12.863554 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:12.863586 | orchestrator | 2026-02-04 02:08:12.863606 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-04 02:08:12.863627 | orchestrator | Wednesday 04 February 2026 02:08:11 +0000 (0:00:01.173) 0:00:42.529 **** 2026-02-04 02:08:12.863648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 02:08:12.863670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:12.863720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:12.863733 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:12.863745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 02:08:12.863786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:12.863839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:12.863859 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:12.863879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 02:08:12.863899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:12.863965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:12.863988 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:12.864009 | orchestrator | 2026-02-04 02:08:12.864029 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-04 02:08:12.864045 | orchestrator | Wednesday 04 February 2026 02:08:11 +0000 (0:00:00.610) 0:00:43.139 **** 2026-02-04 02:08:12.864056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 02:08:12.864068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:12.864098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:19.387282 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:19.387400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 02:08:19.387419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:19.387457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:19.387468 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:19.387478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 02:08:19.387503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 02:08:19.387513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 02:08:19.387522 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:19.387531 | orchestrator | 2026-02-04 02:08:19.387541 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-04 02:08:19.387552 | orchestrator | Wednesday 04 February 2026 02:08:12 +0000 (0:00:00.904) 0:00:44.044 **** 2026-02-04 02:08:19.387561 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-04 02:08:19.387587 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-04 02:08:19.387596 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-04 02:08:19.387604 | orchestrator | 2026-02-04 02:08:19.387614 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-04 02:08:19.387624 | orchestrator | Wednesday 04 February 2026 02:08:14 +0000 (0:00:01.721) 0:00:45.766 **** 2026-02-04 02:08:19.387634 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-04 02:08:19.387643 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-04 02:08:19.387652 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-04 02:08:19.387661 | orchestrator | 2026-02-04 02:08:19.387678 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-04 02:08:19.387688 | orchestrator | Wednesday 04 February 2026 02:08:16 +0000 (0:00:01.682) 0:00:47.448 **** 2026-02-04 02:08:19.387696 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 02:08:19.387705 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 02:08:19.387715 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-04 02:08:19.387724 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-04 02:08:19.387732 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:08:19.387741 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-04 02:08:19.387750 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:08:19.387759 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-04 02:08:19.387768 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:08:19.387777 | orchestrator |
2026-02-04 02:08:19.387786 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-02-04 02:08:19.387795 | orchestrator | Wednesday 04 February 2026 02:08:17 +0000 (0:00:00.838) 0:00:48.286 ****
2026-02-04 02:08:19.387804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-04 02:08:19.387815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-04 02:08:19.387829 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-04 02:08:19.387846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 02:08:23.721738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 02:08:23.721808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-04 02:08:23.721815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 02:08:23.721820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 02:08:23.721824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-04 02:08:23.721828 | orchestrator |
2026-02-04 02:08:23.721846 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-02-04 02:08:23.721852 | orchestrator | Wednesday 04 February 2026 02:08:19 +0000 (0:00:02.285) 0:00:50.571 ****
2026-02-04 02:08:23.721856 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:08:23.721860 | orchestrator |
2026-02-04 02:08:23.721864 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-02-04 02:08:23.721868 | orchestrator | Wednesday 04 February 2026 02:08:20 +0000 (0:00:00.873) 0:00:51.445 ****
2026-02-04 02:08:23.721883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 02:08:23.721985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 02:08:23.721998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 02:08:23.722004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 02:08:23.722010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 02:08:23.722061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 02:08:23.722069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 02:08:23.722092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 02:08:24.408723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 02:08:24.408797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 02:08:24.408805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 02:08:24.408825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 02:08:24.408832 | orchestrator |
2026-02-04 02:08:24.408838 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-02-04 02:08:24.408845 | orchestrator | Wednesday 04 February 2026 02:08:23 +0000 (0:00:03.458) 0:00:54.903 ****
2026-02-04 02:08:24.408851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 02:08:24.408883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 02:08:24.408890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 02:08:24.408895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 02:08:24.408900 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:08:24.408907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 02:08:24.408963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 02:08:24.408975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 02:08:24.408985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 02:08:33.128300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 02:08:33.128401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 02:08:33.128417 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:08:33.128429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 02:08:33.128439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 02:08:33.128472 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:08:33.128482 | orchestrator |
2026-02-04 02:08:33.128492 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-02-04 02:08:33.128502 | orchestrator | Wednesday 04 February 2026 02:08:24 +0000 (0:00:00.680) 0:00:55.584 ****
2026-02-04 02:08:33.128512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-02-04 02:08:33.128524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-02-04 02:08:33.128535 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:08:33.128561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-02-04 02:08:33.128569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-02-04 02:08:33.128577 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:08:33.128586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})
2026-02-04 02:08:33.128595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})
2026-02-04 02:08:33.128603 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:08:33.128610 | orchestrator |
2026-02-04 02:08:33.128618 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-02-04 02:08:33.128642 | orchestrator | Wednesday 04 February 2026 02:08:25 +0000 (0:00:01.274) 0:00:56.858 ****
2026-02-04 02:08:33.128666 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:08:33.128675 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:08:33.128683 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:08:33.128693 | orchestrator |
2026-02-04 02:08:33.128705 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-02-04 02:08:33.128717 | orchestrator | Wednesday 04 February 2026 02:08:26 +0000 (0:00:01.259) 0:00:58.118 ****
2026-02-04 02:08:33.128726 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:08:33.128734 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:08:33.128742 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:08:33.128750 | orchestrator |
2026-02-04 02:08:33.128757 | orchestrator | TASK [include_role : barbican] *************************************************
2026-02-04 02:08:33.128767 | orchestrator | Wednesday 04 February 2026 02:08:29 +0000 (0:00:02.072) 0:01:00.191 ****
2026-02-04 02:08:33.128778 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:08:33.128790 | orchestrator |
2026-02-04 02:08:33.128801 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-02-04 02:08:33.128813 | orchestrator | Wednesday 04 February 2026 02:08:29 +0000 (0:00:00.671) 0:01:00.862 ****
2026-02-04 02:08:33.128827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-04 02:08:33.128860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-04 02:08:33.128873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-04 02:08:33.128885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-04 02:08:33.128906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-04 02:08:33.827095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-04 02:08:33.827197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-04 02:08:33.827216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-04 02:08:33.827223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-04 02:08:33.827228 | orchestrator |
2026-02-04 02:08:33.827235 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2026-02-04 02:08:33.827241 | orchestrator | Wednesday 04 February 2026 02:08:33 +0000 (0:00:03.445) 0:01:04.308 ****
2026-02-04 02:08:33.827247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-04 02:08:33.827266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-04 02:08:33.827279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-04 02:08:33.827284 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:08:33.827294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-04 02:08:33.827299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-04 02:08:33.827304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 02:08:33.827309 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:33.827318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 02:08:43.914381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 02:08:43.914476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 02:08:43.914493 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:43.914509 | orchestrator | 2026-02-04 02:08:43.914520 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-04 02:08:43.914531 | orchestrator | Wednesday 04 February 2026 02:08:33 +0000 (0:00:00.696) 0:01:05.005 **** 2026-02-04 02:08:43.914558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 02:08:43.914571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 02:08:43.914583 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:43.914593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 02:08:43.914603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 02:08:43.914614 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:43.914623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 02:08:43.914634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-04 02:08:43.914644 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:43.914654 | orchestrator | 2026-02-04 02:08:43.914662 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-04 02:08:43.914668 | orchestrator | Wednesday 04 February 2026 02:08:34 +0000 (0:00:00.946) 0:01:05.952 **** 2026-02-04 02:08:43.914673 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:08:43.914679 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:08:43.914685 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:08:43.914690 | orchestrator | 2026-02-04 02:08:43.914696 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-04 02:08:43.914701 | orchestrator | Wednesday 04 February 2026 02:08:36 +0000 (0:00:01.529) 0:01:07.481 **** 2026-02-04 02:08:43.914728 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:08:43.914738 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:08:43.914746 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:08:43.914755 | orchestrator | 2026-02-04 02:08:43.914763 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-04 02:08:43.914773 | orchestrator | 
Wednesday 04 February 2026 02:08:38 +0000 (0:00:02.077) 0:01:09.558 **** 2026-02-04 02:08:43.914782 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:43.914791 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:43.914800 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:43.914810 | orchestrator | 2026-02-04 02:08:43.914818 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-04 02:08:43.914824 | orchestrator | Wednesday 04 February 2026 02:08:38 +0000 (0:00:00.345) 0:01:09.904 **** 2026-02-04 02:08:43.914829 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:08:43.914835 | orchestrator | 2026-02-04 02:08:43.914840 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-04 02:08:43.914859 | orchestrator | Wednesday 04 February 2026 02:08:39 +0000 (0:00:00.729) 0:01:10.633 **** 2026-02-04 02:08:43.914867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-04 02:08:43.914879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-04 02:08:43.914886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-04 02:08:43.914892 | orchestrator | 2026-02-04 02:08:43.914897 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-04 02:08:43.914903 | orchestrator | Wednesday 04 February 2026 02:08:42 +0000 (0:00:02.962) 0:01:13.595 **** 2026-02-04 02:08:43.914970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-04 02:08:43.914977 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:43.914991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-04 02:08:52.095459 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:52.095557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-04 02:08:52.095574 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:52.095582 | orchestrator | 2026-02-04 02:08:52.095589 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-04 02:08:52.095597 | orchestrator | Wednesday 04 February 2026 02:08:43 +0000 (0:00:01.499) 0:01:15.095 **** 2026-02-04 02:08:52.095619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 02:08:52.095629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 02:08:52.095637 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:52.095642 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 02:08:52.095672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 02:08:52.095679 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:52.095685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 02:08:52.095691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 02:08:52.095697 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:52.095704 | orchestrator | 2026-02-04 02:08:52.095710 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-04 02:08:52.095716 | orchestrator | Wednesday 04 February 2026 02:08:45 +0000 (0:00:01.738) 0:01:16.834 **** 2026-02-04 02:08:52.095721 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:52.095727 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:52.095732 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:52.095737 | orchestrator | 2026-02-04 02:08:52.095746 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-04 02:08:52.095769 | orchestrator | Wednesday 04 February 2026 02:08:46 +0000 (0:00:00.469) 0:01:17.304 **** 2026-02-04 02:08:52.095776 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:52.095782 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:52.095788 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:52.095794 | orchestrator | 2026-02-04 02:08:52.095801 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-04 02:08:52.095806 | orchestrator | Wednesday 04 February 2026 02:08:47 +0000 (0:00:01.388) 0:01:18.693 **** 2026-02-04 02:08:52.095812 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:08:52.095818 | orchestrator | 2026-02-04 02:08:52.095824 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-04 02:08:52.095830 | orchestrator | Wednesday 04 February 2026 02:08:48 +0000 (0:00:01.034) 0:01:19.728 **** 2026-02-04 02:08:52.095843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 02:08:52.095863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:08:52.095872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 
02:08:52.095879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 02:08:52.095905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 02:08:52.825659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 02:08:52.825791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:08:52.825807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-04 02:08:52.825815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:08:52.825822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 02:08:52.825844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 02:08:52.825857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 02:08:52.825871 | orchestrator | 2026-02-04 02:08:52.825880 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-04 02:08:52.825887 | orchestrator | Wednesday 04 February 2026 02:08:52 +0000 (0:00:03.640) 0:01:23.368 **** 2026-02-04 02:08:52.825895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 02:08:52.825902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:08:52.825959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 02:08:52.825967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 02:08:52.825974 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:52.825995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 02:08:59.766119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 
02:08:59.766191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 02:08:59.766200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 02:08:59.766204 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:59.766211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 02:08:59.766215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:08:59.766258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 02:08:59.766263 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 02:08:59.766267 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:59.766271 | orchestrator | 2026-02-04 02:08:59.766276 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-04 02:08:59.766282 | orchestrator | Wednesday 04 February 2026 02:08:52 +0000 (0:00:00.784) 0:01:24.153 **** 2026-02-04 02:08:59.766286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 02:08:59.766292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 02:08:59.766297 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:59.766301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 02:08:59.766305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 02:08:59.766309 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:59.766312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 02:08:59.766316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-04 02:08:59.766320 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:59.766324 | orchestrator | 2026-02-04 02:08:59.766328 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-04 02:08:59.766332 | orchestrator | Wednesday 04 February 2026 02:08:54 +0000 (0:00:01.438) 0:01:25.591 **** 2026-02-04 02:08:59.766336 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:08:59.766343 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:08:59.766347 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:08:59.766351 | orchestrator | 2026-02-04 02:08:59.766355 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-04 02:08:59.766358 | orchestrator | Wednesday 04 February 2026 02:08:55 +0000 (0:00:01.354) 0:01:26.946 **** 2026-02-04 02:08:59.766362 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:08:59.766366 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:08:59.766370 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:08:59.766374 | orchestrator | 2026-02-04 02:08:59.766378 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-04 02:08:59.766382 | orchestrator | Wednesday 04 February 2026 02:08:58 +0000 
(0:00:02.262) 0:01:29.209 **** 2026-02-04 02:08:59.766386 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:59.766389 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:59.766393 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:59.766397 | orchestrator | 2026-02-04 02:08:59.766401 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-04 02:08:59.766404 | orchestrator | Wednesday 04 February 2026 02:08:58 +0000 (0:00:00.343) 0:01:29.552 **** 2026-02-04 02:08:59.766408 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:08:59.766412 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:08:59.766416 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:08:59.766420 | orchestrator | 2026-02-04 02:08:59.766423 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-04 02:08:59.766427 | orchestrator | Wednesday 04 February 2026 02:08:58 +0000 (0:00:00.329) 0:01:29.882 **** 2026-02-04 02:08:59.766431 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:08:59.766435 | orchestrator | 2026-02-04 02:08:59.766439 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-04 02:08:59.766445 | orchestrator | Wednesday 04 February 2026 02:08:59 +0000 (0:00:01.066) 0:01:30.949 **** 2026-02-04 02:09:03.353153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 02:09:03.353260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 02:09:03.353277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 02:09:03.353317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 02:09:03.353330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 02:09:03.353375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 02:09:03.353389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 02:09:03.353401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 02:09:03.353413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 02:09:03.353432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 02:09:03.353444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 02:09:03.353468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 02:09:04.207549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.207661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.207678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.207713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.207725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.207737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  
2026-02-04 02:09:04.207783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.207797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.207808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.207828 | orchestrator | 2026-02-04 02:09:04.207842 | 
orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-04 02:09:04.207854 | orchestrator | Wednesday 04 February 2026 02:09:03 +0000 (0:00:03.801) 0:01:34.751 **** 2026-02-04 02:09:04.207873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 02:09:04.207894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 02:09:04.207967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.208000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.693084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 02:09:04.693189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.693200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 02:09:04.693210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.693219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.693607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.693648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 
02:09:04.693662 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:09:04.693676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.693698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 02:09:04.693716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 
02:09:04.693723 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:09:04.693730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 02:09:04.693738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 02:09:04.693751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 02:09:16.182979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 02:09:16.183050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 02:09:16.183069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 02:09:16.183074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 02:09:16.183079 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:09:16.183085 | orchestrator | 2026-02-04 02:09:16.183091 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-04 02:09:16.183096 | orchestrator | Wednesday 04 February 2026 02:09:04 +0000 (0:00:01.124) 0:01:35.875 **** 2026-02-04 02:09:16.183101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-04 02:09:16.183106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-04 02:09:16.183112 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:09:16.183115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}})  2026-02-04 02:09:16.183119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-04 02:09:16.183123 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:09:16.183127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-04 02:09:16.183148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-04 02:09:16.183152 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:09:16.183156 | orchestrator | 2026-02-04 02:09:16.183160 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-04 02:09:16.183173 | orchestrator | Wednesday 04 February 2026 02:09:06 +0000 (0:00:01.499) 0:01:37.375 **** 2026-02-04 02:09:16.183178 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:09:16.183182 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:09:16.183186 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:09:16.183190 | orchestrator | 2026-02-04 02:09:16.183193 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-04 02:09:16.183197 | orchestrator | Wednesday 04 February 2026 02:09:07 +0000 (0:00:01.263) 0:01:38.638 **** 2026-02-04 02:09:16.183201 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:09:16.183205 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:09:16.183209 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:09:16.183212 | orchestrator | 2026-02-04 02:09:16.183216 | orchestrator | TASK [include_role : 
etcd] ***************************************************** 2026-02-04 02:09:16.183220 | orchestrator | Wednesday 04 February 2026 02:09:09 +0000 (0:00:01.999) 0:01:40.638 **** 2026-02-04 02:09:16.183224 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:09:16.183228 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:09:16.183231 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:09:16.183235 | orchestrator | 2026-02-04 02:09:16.183239 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-04 02:09:16.183243 | orchestrator | Wednesday 04 February 2026 02:09:09 +0000 (0:00:00.336) 0:01:40.975 **** 2026-02-04 02:09:16.183247 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:09:16.183250 | orchestrator | 2026-02-04 02:09:16.183254 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-04 02:09:16.183258 | orchestrator | Wednesday 04 February 2026 02:09:10 +0000 (0:00:01.168) 0:01:42.143 **** 2026-02-04 02:09:16.183267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 02:09:16.183276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 02:09:19.455994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 02:09:19.456076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 02:09:19.456118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 02:09:19.456127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 02:09:19.456139 | orchestrator | 2026-02-04 02:09:19.456146 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-04 02:09:19.456153 | orchestrator | Wednesday 04 February 2026 02:09:16 +0000 (0:00:05.353) 0:01:47.497 **** 2026-02-04 02:09:19.456168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 02:09:19.579998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 02:09:19.580159 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:09:19.580193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 02:09:19.580267 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 02:09:19.580308 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:09:19.580333 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 02:09:19.580379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 02:09:32.585029 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:09:32.585108 | orchestrator | 2026-02-04 02:09:32.585116 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-04 02:09:32.585123 | orchestrator | Wednesday 04 February 2026 02:09:19 +0000 (0:00:03.265) 0:01:50.763 **** 2026-02-04 
02:09:32.585131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 02:09:32.585139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 02:09:32.585145 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:09:32.585151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 02:09:32.585157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 02:09:32.585162 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:09:32.585168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 02:09:32.585186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 02:09:32.585192 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:09:32.585197 | orchestrator | 2026-02-04 02:09:32.585202 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-04 02:09:32.585208 | orchestrator | Wednesday 04 February 2026 02:09:24 +0000 (0:00:04.719) 0:01:55.482 **** 2026-02-04 02:09:32.585231 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:09:32.585237 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:09:32.585242 | orchestrator | changed: 
[testbed-node-2] 2026-02-04 02:09:32.585247 | orchestrator | 2026-02-04 02:09:32.585253 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-04 02:09:32.585258 | orchestrator | Wednesday 04 February 2026 02:09:25 +0000 (0:00:01.336) 0:01:56.818 **** 2026-02-04 02:09:32.585263 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:09:32.585268 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:09:32.585273 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:09:32.585278 | orchestrator | 2026-02-04 02:09:32.585283 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-04 02:09:32.585299 | orchestrator | Wednesday 04 February 2026 02:09:27 +0000 (0:00:02.049) 0:01:58.867 **** 2026-02-04 02:09:32.585305 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:09:32.585310 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:09:32.585315 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:09:32.585320 | orchestrator | 2026-02-04 02:09:32.585325 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-04 02:09:32.585334 | orchestrator | Wednesday 04 February 2026 02:09:28 +0000 (0:00:00.354) 0:01:59.222 **** 2026-02-04 02:09:32.585342 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:09:32.585351 | orchestrator | 2026-02-04 02:09:32.585359 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-04 02:09:32.585367 | orchestrator | Wednesday 04 February 2026 02:09:29 +0000 (0:00:01.223) 0:02:00.446 **** 2026-02-04 02:09:32.585376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 02:09:32.585386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 02:09:32.585396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 02:09:32.585405 | orchestrator | 2026-02-04 02:09:32.585414 | orchestrator | TASK 
[haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-04 02:09:32.585431 | orchestrator | Wednesday 04 February 2026 02:09:32 +0000 (0:00:03.079) 0:02:03.526 **** 2026-02-04 02:09:32.585440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 02:09:32.585457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 02:09:42.136252 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:09:42.136333 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:09:42.136387 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-04 02:09:42.136396 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:09:42.136400 | orchestrator | 2026-02-04 02:09:42.136406 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-04 02:09:42.136411 | orchestrator | Wednesday 04 February 2026 02:09:32 +0000 (0:00:00.475) 0:02:04.001 **** 2026-02-04 02:09:42.136415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-04 02:09:42.136421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-04 02:09:42.136427 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:09:42.136431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-04 02:09:42.136435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-04 02:09:42.136439 | orchestrator | skipping: [testbed-node-1] 2026-02-04 
02:09:42.136443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-04 02:09:42.136446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-04 02:09:42.136462 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:09:42.136466 | orchestrator | 2026-02-04 02:09:42.136470 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-04 02:09:42.136474 | orchestrator | Wednesday 04 February 2026 02:09:33 +0000 (0:00:00.967) 0:02:04.969 **** 2026-02-04 02:09:42.136477 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:09:42.136481 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:09:42.136485 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:09:42.136489 | orchestrator | 2026-02-04 02:09:42.136493 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-04 02:09:42.136497 | orchestrator | Wednesday 04 February 2026 02:09:35 +0000 (0:00:01.276) 0:02:06.245 **** 2026-02-04 02:09:42.136500 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:09:42.136504 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:09:42.136508 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:09:42.136512 | orchestrator | 2026-02-04 02:09:42.136516 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-04 02:09:42.136522 | orchestrator | Wednesday 04 February 2026 02:09:37 +0000 (0:00:02.126) 0:02:08.371 **** 2026-02-04 02:09:42.136526 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:09:42.136530 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:09:42.136533 | orchestrator | 
skipping: [testbed-node-2] 2026-02-04 02:09:42.136537 | orchestrator | 2026-02-04 02:09:42.136541 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-04 02:09:42.136545 | orchestrator | Wednesday 04 February 2026 02:09:37 +0000 (0:00:00.342) 0:02:08.714 **** 2026-02-04 02:09:42.136549 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:09:42.136552 | orchestrator | 2026-02-04 02:09:42.136556 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-04 02:09:42.136560 | orchestrator | Wednesday 04 February 2026 02:09:38 +0000 (0:00:01.214) 0:02:09.929 **** 2026-02-04 02:09:42.136577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 02:09:42.136590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 02:09:42.136599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 02:09:43.811017 | orchestrator | 2026-02-04 02:09:43.811135 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-04 02:09:43.811151 | orchestrator | Wednesday 04 February 2026 02:09:42 +0000 (0:00:03.389) 0:02:13.318 **** 2026-02-04 02:09:43.811187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 
'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 02:09:43.811204 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:09:43.811239 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 02:09:43.811273 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:09:43.811291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 02:09:43.811302 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:09:43.811312 | orchestrator | 2026-02-04 02:09:43.811322 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-04 02:09:43.811332 | orchestrator | Wednesday 04 February 2026 02:09:42 +0000 (0:00:00.680) 0:02:13.999 **** 2026-02-04 02:09:43.811343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 02:09:43.811363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 02:09:43.811376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 02:09:43.811394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 02:09:53.384635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-04 02:09:53.384718 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:09:53.384729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 02:09:53.384739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 02:09:53.384761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 02:09:53.384768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2026-02-04 02:09:53.384775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-04 02:09:53.384780 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:09:53.384786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 02:09:53.384792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 02:09:53.384798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-04 02:09:53.384822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 02:09:53.384828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-04 02:09:53.384833 | 
orchestrator | skipping: [testbed-node-2]
2026-02-04 02:09:53.384839 | orchestrator |
2026-02-04 02:09:53.384846 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-02-04 02:09:53.384853 | orchestrator | Wednesday 04 February 2026 02:09:43 +0000 (0:00:00.994) 0:02:14.994 ****
2026-02-04 02:09:53.384858 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:09:53.384864 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:09:53.384905 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:09:53.384911 | orchestrator |
2026-02-04 02:09:53.384916 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-02-04 02:09:53.384922 | orchestrator | Wednesday 04 February 2026 02:09:45 +0000 (0:00:01.634) 0:02:16.628 ****
2026-02-04 02:09:53.384928 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:09:53.384934 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:09:53.384940 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:09:53.384945 | orchestrator |
2026-02-04 02:09:53.384950 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-02-04 02:09:53.384956 | orchestrator | Wednesday 04 February 2026 02:09:47 +0000 (0:00:02.164) 0:02:18.792 ****
2026-02-04 02:09:53.384962 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:09:53.384967 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:09:53.384984 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:09:53.384990 | orchestrator |
2026-02-04 02:09:53.384996 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-02-04 02:09:53.385001 | orchestrator | Wednesday 04 February 2026 02:09:47 +0000 (0:00:00.336) 0:02:19.129 ****
2026-02-04 02:09:53.385007 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:09:53.385012 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:09:53.385018 |
orchestrator | skipping: [testbed-node-2] 2026-02-04 02:09:53.385023 | orchestrator | 2026-02-04 02:09:53.385029 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-04 02:09:53.385034 | orchestrator | Wednesday 04 February 2026 02:09:48 +0000 (0:00:00.352) 0:02:19.481 **** 2026-02-04 02:09:53.385040 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:09:53.385045 | orchestrator | 2026-02-04 02:09:53.385051 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-04 02:09:53.385056 | orchestrator | Wednesday 04 February 2026 02:09:49 +0000 (0:00:01.207) 0:02:20.689 **** 2026-02-04 02:09:53.385068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 02:09:53.385083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 02:09:53.385157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 02:09:53.385166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 02:09:53.385178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 02:09:54.026509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 02:09:54.026583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 02:09:54.026608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 02:09:54.026613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 02:09:54.026618 | orchestrator | 2026-02-04 02:09:54.026623 | orchestrator | TASK 
[haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-04 02:09:54.026628 | orchestrator | Wednesday 04 February 2026 02:09:53 +0000 (0:00:03.877) 0:02:24.566 **** 2026-02-04 02:09:54.026642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 02:09:54.026650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 02:09:54.026655 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 02:09:54.026669 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:09:54.026674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 02:09:54.026679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 02:09:54.026683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 02:09:54.026687 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:09:54.026697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 02:10:03.737403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 02:10:03.737495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 02:10:03.737507 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:03.737516 | orchestrator | 2026-02-04 02:10:03.737524 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-04 02:10:03.737535 | orchestrator | Wednesday 04 February 2026 02:09:54 +0000 (0:00:00.637) 0:02:25.204 **** 2026-02-04 02:10:03.737544 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-04 02:10:03.737554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-04 02:10:03.737561 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:10:03.737568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-04 02:10:03.737574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-04 02:10:03.737584 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:10:03.737591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-04 02:10:03.737597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-02-04 02:10:03.737604 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:10:03.737610 | orchestrator |
2026-02-04 02:10:03.737615 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-02-04 02:10:03.737622 | orchestrator | Wednesday 04 February 2026 02:09:55 +0000 (0:00:01.171) 0:02:26.375 ****
2026-02-04 02:10:03.737630 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:10:03.737638 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:10:03.737666 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:10:03.737672 | orchestrator |
2026-02-04 02:10:03.737678 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-02-04 02:10:03.737684 | orchestrator | Wednesday 04 February 2026 02:09:56 +0000 (0:00:01.375) 0:02:27.751 ****
2026-02-04 02:10:03.737690 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:10:03.737696 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:10:03.737702 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:10:03.737713 | orchestrator |
2026-02-04 02:10:03.737719 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-02-04 02:10:03.737725 | orchestrator | Wednesday 04 February 2026 02:09:58 +0000 (0:00:02.087) 0:02:29.838 ****
2026-02-04 02:10:03.737731 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:10:03.737749 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:10:03.737759 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:10:03.737766 | orchestrator |
2026-02-04 02:10:03.737772 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-02-04 02:10:03.737793 | orchestrator | Wednesday 04 February 2026 02:09:58 +0000 (0:00:00.319) 0:02:30.158 ****
2026-02-04 02:10:03.737800 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:10:03.737810 | orchestrator |
2026-02-04 02:10:03.737816 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy
config] ********************* 2026-02-04 02:10:03.737822 | orchestrator | Wednesday 04 February 2026 02:10:00 +0000 (0:00:01.339) 0:02:31.497 **** 2026-02-04 02:10:03.737830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 02:10:03.737840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:10:03.737847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 02:10:03.737862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:10:03.737919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 02:10:09.322517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:10:09.322624 | orchestrator | 2026-02-04 02:10:09.322638 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-04 02:10:09.322651 | orchestrator | Wednesday 04 February 2026 02:10:03 +0000 (0:00:03.416) 0:02:34.913 **** 2026-02-04 02:10:09.322663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 02:10:09.322720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:10:09.322754 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:09.322771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 02:10:09.322802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:10:09.322814 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:09.322825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 02:10:09.322836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:10:09.322853 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:09.322863 | orchestrator | 2026-02-04 02:10:09.322922 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-04 02:10:09.322934 | orchestrator | Wednesday 04 February 2026 02:10:04 +0000 (0:00:00.741) 0:02:35.655 **** 2026-02-04 02:10:09.322944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-04 02:10:09.322956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-04 02:10:09.322968 | orchestrator | skipping: 
[testbed-node-0] 2026-02-04 02:10:09.322979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-04 02:10:09.322988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-04 02:10:09.322998 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:09.323007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-04 02:10:09.323017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-04 02:10:09.323027 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:09.323037 | orchestrator | 2026-02-04 02:10:09.323051 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-04 02:10:09.323062 | orchestrator | Wednesday 04 February 2026 02:10:05 +0000 (0:00:01.029) 0:02:36.684 **** 2026-02-04 02:10:09.323072 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:10:09.323082 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:10:09.323092 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:10:09.323102 | orchestrator | 2026-02-04 02:10:09.323112 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-04 02:10:09.323122 | orchestrator | Wednesday 04 February 2026 02:10:07 +0000 (0:00:01.628) 0:02:38.312 **** 2026-02-04 02:10:09.323131 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:10:09.323141 | orchestrator | changed: 
[testbed-node-2] 2026-02-04 02:10:09.323151 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:10:09.323161 | orchestrator | 2026-02-04 02:10:09.323171 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-04 02:10:09.323187 | orchestrator | Wednesday 04 February 2026 02:10:09 +0000 (0:00:02.183) 0:02:40.496 **** 2026-02-04 02:10:14.098366 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:10:14.098458 | orchestrator | 2026-02-04 02:10:14.098471 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-04 02:10:14.098481 | orchestrator | Wednesday 04 February 2026 02:10:10 +0000 (0:00:01.188) 0:02:41.684 **** 2026-02-04 02:10:14.098500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 02:10:14.098532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:10:14.098542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 02:10:14.098550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 02:10:14.098571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 02:10:14.098595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:10:14.098600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 02:10:14.098609 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 02:10:14.098613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 02:10:14.098618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:10:14.098625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 02:10:14.098634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 02:10:15.161474 | orchestrator | 2026-02-04 02:10:15.161573 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-04 02:10:15.161589 | orchestrator | Wednesday 04 February 2026 02:10:14 +0000 (0:00:03.678) 0:02:45.362 **** 2026-02-04 02:10:15.161626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 02:10:15.161641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:10:15.161654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 02:10:15.161681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 02:10:15.161692 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:15.161726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 02:10:15.161757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:10:15.161776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 02:10:15.161786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 02:10:15.161796 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:15.161806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 02:10:15.161817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:10:15.161832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 02:10:15.161850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 02:10:27.144342 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:27.144435 | orchestrator | 2026-02-04 02:10:27.144448 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-04 02:10:27.144456 | orchestrator | Wednesday 04 February 2026 02:10:15 +0000 (0:00:01.070) 0:02:46.433 **** 2026-02-04 02:10:27.144463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-04 02:10:27.144471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-04 02:10:27.144480 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:27.144487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-04 02:10:27.144494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-04 02:10:27.144500 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:27.144506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-04 02:10:27.144513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-04 02:10:27.144519 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:27.144525 | orchestrator | 2026-02-04 02:10:27.144532 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-04 02:10:27.144539 | orchestrator | Wednesday 04 February 2026 02:10:16 +0000 (0:00:01.002) 0:02:47.435 **** 2026-02-04 02:10:27.144545 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:10:27.144552 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:10:27.144558 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:10:27.144564 | orchestrator | 2026-02-04 02:10:27.144570 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-04 02:10:27.144576 | orchestrator | Wednesday 04 February 2026 02:10:17 +0000 (0:00:01.300) 0:02:48.736 **** 2026-02-04 02:10:27.144583 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:10:27.144589 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:10:27.144595 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:10:27.144601 | orchestrator | 2026-02-04 02:10:27.144608 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-04 02:10:27.144614 | orchestrator | Wednesday 04 February 2026 02:10:19 +0000 (0:00:02.204) 0:02:50.940 **** 2026-02-04 02:10:27.144620 | 
orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:10:27.144626 | orchestrator | 2026-02-04 02:10:27.144632 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-04 02:10:27.144638 | orchestrator | Wednesday 04 February 2026 02:10:21 +0000 (0:00:01.478) 0:02:52.419 **** 2026-02-04 02:10:27.144645 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 02:10:27.144651 | orchestrator | 2026-02-04 02:10:27.144657 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-04 02:10:27.144688 | orchestrator | Wednesday 04 February 2026 02:10:24 +0000 (0:00:03.130) 0:02:55.549 **** 2026-02-04 02:10:27.144733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:10:27.144742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 02:10:27.144749 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:27.144758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:10:27.144771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 02:10:27.144776 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:27.144788 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:10:29.664828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 02:10:29.665002 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:29.665021 | orchestrator | 2026-02-04 02:10:29.665031 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-04 02:10:29.665041 | orchestrator | Wednesday 04 February 2026 02:10:27 +0000 (0:00:02.768) 0:02:58.317 **** 2026-02-04 02:10:29.665089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:10:29.665101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 02:10:29.665110 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:29.665137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:10:29.665165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  
2026-02-04 02:10:29.665174 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:29.665183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:10:29.665197 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 02:10:40.191726 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:40.191813 | orchestrator | 2026-02-04 02:10:40.191824 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-04 02:10:40.191834 | orchestrator | Wednesday 04 February 2026 02:10:29 +0000 (0:00:02.524) 0:03:00.841 **** 2026-02-04 02:10:40.191843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 02:10:40.191876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 02:10:40.191946 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:40.191954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 02:10:40.191961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 02:10:40.191968 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:40.191975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 02:10:40.191982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 02:10:40.191989 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:40.191996 | orchestrator | 2026-02-04 02:10:40.192003 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-04 02:10:40.192010 | orchestrator | Wednesday 04 February 2026 02:10:32 +0000 (0:00:03.196) 0:03:04.038 **** 2026-02-04 02:10:40.192017 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:10:40.192042 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:10:40.192050 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:10:40.192057 | orchestrator | 2026-02-04 02:10:40.192064 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-04 02:10:40.192070 | orchestrator | Wednesday 04 February 2026 02:10:35 +0000 (0:00:02.171) 0:03:06.210 **** 2026-02-04 02:10:40.192077 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:40.192084 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:40.192091 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:40.192097 | orchestrator | 2026-02-04 02:10:40.192104 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-04 02:10:40.192111 | orchestrator | Wednesday 04 February 2026 02:10:36 +0000 (0:00:01.575) 0:03:07.786 **** 2026-02-04 02:10:40.192117 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:40.192124 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:40.192131 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:40.192137 | orchestrator | 2026-02-04 02:10:40.192144 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-04 02:10:40.192151 | orchestrator | Wednesday 04 February 2026 02:10:36 +0000 (0:00:00.327) 0:03:08.113 **** 2026-02-04 02:10:40.192157 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:10:40.192165 | orchestrator | 2026-02-04 02:10:40.192171 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-04 02:10:40.192178 | orchestrator | Wednesday 04 February 2026 02:10:38 +0000 (0:00:01.538) 0:03:09.651 **** 2026-02-04 02:10:40.192190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 02:10:40.192201 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 02:10:40.192208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 02:10:40.192215 | orchestrator | 2026-02-04 02:10:40.192222 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-04 02:10:40.192235 | orchestrator | Wednesday 04 February 2026 02:10:39 +0000 (0:00:01.498) 0:03:11.150 **** 2026-02-04 02:10:40.192247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 02:10:50.080188 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:50.080287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 02:10:50.080299 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:50.080304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 02:10:50.080309 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:50.080314 | orchestrator | 2026-02-04 02:10:50.080319 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-04 02:10:50.080324 | orchestrator | Wednesday 04 February 2026 02:10:40 +0000 (0:00:00.419) 0:03:11.570 **** 2026-02-04 02:10:50.080330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-04 02:10:50.080335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-04 02:10:50.080339 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:50.080343 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:50.080347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-04 02:10:50.080368 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:50.080372 | orchestrator | 2026-02-04 02:10:50.080405 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-04 02:10:50.080409 | orchestrator | Wednesday 04 February 2026 02:10:41 +0000 (0:00:00.967) 0:03:12.538 **** 2026-02-04 02:10:50.080413 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:50.080417 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:50.080420 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:50.080424 | orchestrator | 2026-02-04 02:10:50.080428 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-04 02:10:50.080432 | orchestrator | Wednesday 04 February 2026 02:10:41 +0000 (0:00:00.489) 0:03:13.027 **** 2026-02-04 02:10:50.080435 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:50.080439 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:50.080443 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:50.080447 | orchestrator | 2026-02-04 02:10:50.080451 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-04 02:10:50.080454 | orchestrator | Wednesday 04 February 2026 02:10:43 +0000 (0:00:01.511) 0:03:14.539 **** 2026-02-04 02:10:50.080460 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:50.080465 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:50.080471 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:10:50.080477 | orchestrator | 2026-02-04 02:10:50.080480 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-04 02:10:50.080484 | orchestrator | Wednesday 04 February 2026 02:10:43 +0000 (0:00:00.386) 0:03:14.926 **** 2026-02-04 02:10:50.080488 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:10:50.080492 | orchestrator | 2026-02-04 02:10:50.080496 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-02-04 02:10:50.080499 | orchestrator | Wednesday 04 February 2026 02:10:45 +0000 (0:00:01.731) 0:03:16.657 **** 2026-02-04 02:10:50.080516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:10:50.080525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.080530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.080541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.080545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 02:10:50.080554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.166974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:50.167095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:50.167110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:10:50.167137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.167145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.167167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.167180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:10:50.167188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.167199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.167207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 02:10:50.167213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 02:10:50.167225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:10:50.307794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:50.307938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.307977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.307986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.307993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.308018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 02:10:50.308036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:50.308050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:10:50.308057 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.308072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:50.308081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 02:10:50.308102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.529288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.529411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:10:50.529432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:50.529441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.529449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:50.529458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 02:10:50.529481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.529504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 
02:10:50.529511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:10:50.529519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.529527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:50.529534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 02:10:50.529550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 02:10:51.743151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 
'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:10:51.743234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:51.743244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.743255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 02:10:51.743265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:10:51.743273 | orchestrator | 2026-02-04 02:10:51.743282 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-04 02:10:51.743308 | orchestrator | Wednesday 04 February 2026 02:10:50 +0000 (0:00:05.050) 0:03:21.707 **** 2026-02-04 02:10:51.743337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:10:51.743348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.743356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.743365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.743372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2026-02-04 02:10:51.743394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.832467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:51.832565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:51.832583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:10:51.832598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.832611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.832681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:10:51.832696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.832710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 
'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.832722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.832732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 02:10:51.832739 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 02:10:51.832756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:51.832768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2026-02-04 02:10:51.918624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:10:51.918739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.918763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:51.918782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.918855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 02:10:51.918930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:51.918947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.918963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:10:51.918979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.919007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:51.919018 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:10:51.919031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:10:51.919049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-04 02:10:52.163485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 
02:10:52.163654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:52.163696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 02:10:52.163712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:52.163735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:52.163761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:10:52.163781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:52.163827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:52.163849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 02:10:52.163929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:10:52.163950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:10:52.163963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 02:10:52.163975 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:10:52.163997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-04 02:11:03.128547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-04 02:11:03.128658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 02:11:03.128703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 02:11:03.128735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:11:03.128749 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:11:03.128764 | orchestrator | 2026-02-04 02:11:03.128776 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-04 02:11:03.128790 | orchestrator | Wednesday 04 February 2026 02:10:52 +0000 (0:00:01.632) 0:03:23.340 **** 2026-02-04 02:11:03.128802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-04 02:11:03.128815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}})  2026-02-04 02:11:03.128827 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:11:03.128838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-04 02:11:03.128850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-04 02:11:03.128860 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:11:03.128975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-04 02:11:03.128991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-04 02:11:03.129012 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:11:03.129024 | orchestrator | 2026-02-04 02:11:03.129035 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-04 02:11:03.129047 | orchestrator | Wednesday 04 February 2026 02:10:54 +0000 (0:00:02.279) 0:03:25.620 **** 2026-02-04 02:11:03.129058 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:11:03.129069 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:11:03.129082 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:11:03.129096 | orchestrator | 2026-02-04 02:11:03.129109 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-04 02:11:03.129121 | orchestrator | Wednesday 04 February 2026 02:10:55 +0000 (0:00:01.348) 0:03:26.968 **** 2026-02-04 02:11:03.129134 | orchestrator | changed: 
[testbed-node-0] 2026-02-04 02:11:03.129147 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:11:03.129161 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:11:03.129174 | orchestrator | 2026-02-04 02:11:03.129187 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-04 02:11:03.129200 | orchestrator | Wednesday 04 February 2026 02:10:58 +0000 (0:00:02.347) 0:03:29.316 **** 2026-02-04 02:11:03.129213 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:11:03.129225 | orchestrator | 2026-02-04 02:11:03.129238 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-04 02:11:03.129251 | orchestrator | Wednesday 04 February 2026 02:10:59 +0000 (0:00:01.350) 0:03:30.667 **** 2026-02-04 02:11:03.129267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:11:03.129290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:11:03.129305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:11:03.129336 | orchestrator | 2026-02-04 02:11:03.129350 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-04 02:11:03.129372 | orchestrator | Wednesday 04 February 2026 02:11:03 +0000 (0:00:03.637) 
0:03:34.305 **** 2026-02-04 02:11:15.130762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 02:11:15.130845 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:11:15.130854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
2026-02-04 02:11:15.130858 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:11:15.130875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 02:11:15.130879 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:11:15.130883 | orchestrator | 2026-02-04 02:11:15.130912 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-04 02:11:15.130920 | orchestrator | Wednesday 04 February 2026 02:11:03 +0000 (0:00:00.593) 0:03:34.898 **** 2026-02-04 02:11:15.130929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 02:11:15.130962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 02:11:15.130971 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:11:15.130977 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 02:11:15.130982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 02:11:15.130987 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:11:15.131007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 02:11:15.131013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-02-04 02:11:15.131020 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:11:15.131026 | orchestrator | 2026-02-04 02:11:15.131032 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-04 02:11:15.131038 | orchestrator | Wednesday 04 February 2026 02:11:04 +0000 (0:00:00.838) 0:03:35.737 **** 2026-02-04 02:11:15.131044 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:11:15.131050 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:11:15.131056 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:11:15.131060 | orchestrator | 2026-02-04 02:11:15.131063 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-04 02:11:15.131067 | orchestrator | Wednesday 04 February 2026 02:11:06 +0000 (0:00:02.114) 0:03:37.851 **** 2026-02-04 02:11:15.131071 | orchestrator | changed: [testbed-node-0] 2026-02-04 
02:11:15.131075 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:11:15.131078 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:11:15.131082 | orchestrator | 2026-02-04 02:11:15.131086 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-04 02:11:15.131090 | orchestrator | Wednesday 04 February 2026 02:11:08 +0000 (0:00:01.997) 0:03:39.849 **** 2026-02-04 02:11:15.131094 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:11:15.131098 | orchestrator | 2026-02-04 02:11:15.131102 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-04 02:11:15.131105 | orchestrator | Wednesday 04 February 2026 02:11:10 +0000 (0:00:01.746) 0:03:41.595 **** 2026-02-04 02:11:15.131111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:11:15.131132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:11:15.131138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:11:15.131152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:11:16.226364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:11:16.226457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:11:16.226498 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:11:16.226505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:11:16.226511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 
'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:11:16.226517 | orchestrator | 2026-02-04 02:11:16.226524 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-04 02:11:16.226530 | orchestrator | Wednesday 04 February 2026 02:11:15 +0000 (0:00:04.712) 0:03:46.308 **** 2026-02-04 02:11:16.226550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 02:11:16.226562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:11:16.226571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:11:16.226577 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:11:16.226584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 02:11:16.226594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:11:28.439491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:11:28.439599 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:11:28.439636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 02:11:28.439673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:11:28.439685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:11:28.439695 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:11:28.439705 | orchestrator | 2026-02-04 02:11:28.439716 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-04 02:11:28.439728 | orchestrator | Wednesday 04 February 2026 02:11:16 +0000 (0:00:01.098) 0:03:47.407 **** 2026-02-04 02:11:28.439739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 02:11:28.439753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 02:11:28.439766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 02:11:28.439795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 02:11:28.439808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 02:11:28.439818 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:11:28.439828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 02:11:28.439848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 02:11:28.439858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 02:11:28.439868 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:11:28.439878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 02:11:28.439945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-04 02:11:28.439965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 
02:11:28.439975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-04 02:11:28.439985 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:11:28.439994 | orchestrator | 2026-02-04 02:11:28.440004 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-04 02:11:28.440014 | orchestrator | Wednesday 04 February 2026 02:11:17 +0000 (0:00:01.477) 0:03:48.884 **** 2026-02-04 02:11:28.440023 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:11:28.440033 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:11:28.440043 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:11:28.440053 | orchestrator | 2026-02-04 02:11:28.440062 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-04 02:11:28.440072 | orchestrator | Wednesday 04 February 2026 02:11:19 +0000 (0:00:01.394) 0:03:50.278 **** 2026-02-04 02:11:28.440081 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:11:28.440091 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:11:28.440101 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:11:28.440111 | orchestrator | 2026-02-04 02:11:28.440120 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-04 02:11:28.440130 | orchestrator | Wednesday 04 February 2026 02:11:21 +0000 (0:00:02.159) 0:03:52.437 **** 2026-02-04 02:11:28.440140 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:11:28.440151 | orchestrator | 2026-02-04 02:11:28.440160 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-04 02:11:28.440171 | orchestrator | Wednesday 04 February 2026 02:11:23 +0000 
(0:00:01.867) 0:03:54.305 **** 2026-02-04 02:11:28.440181 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-04 02:11:28.440194 | orchestrator | 2026-02-04 02:11:28.440204 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-04 02:11:28.440213 | orchestrator | Wednesday 04 February 2026 02:11:24 +0000 (0:00:00.963) 0:03:55.269 **** 2026-02-04 02:11:28.440225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-04 02:11:28.440257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-04 02:11:41.429842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-04 02:11:41.429940 | orchestrator | 2026-02-04 02:11:41.429951 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-04 02:11:41.429958 | orchestrator | Wednesday 04 February 2026 02:11:28 +0000 (0:00:04.349) 0:03:59.618 **** 2026-02-04 02:11:41.429965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 02:11:41.429971 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:11:41.429991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 02:11:41.429997 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:11:41.430002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 02:11:41.430008 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:11:41.430044 | orchestrator | 2026-02-04 02:11:41.430051 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-04 02:11:41.430057 | orchestrator | Wednesday 04 February 2026 02:11:29 +0000 (0:00:01.527) 0:04:01.146 **** 2026-02-04 02:11:41.430064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 02:11:41.430073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 02:11:41.430094 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:11:41.430100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 02:11:41.430105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 02:11:41.430111 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:11:41.430116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-04 02:11:41.430121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-04 02:11:41.430141 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:11:41.430149 | orchestrator |
2026-02-04 02:11:41.430157 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-04 02:11:41.430165 | orchestrator | Wednesday 04 February 2026 02:11:31 +0000 (0:00:01.699) 0:04:02.845 ****
2026-02-04 02:11:41.430173 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:11:41.430181 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:11:41.430188 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:11:41.430196 | orchestrator |
2026-02-04 02:11:41.430205 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-04 02:11:41.430213 | orchestrator | Wednesday 04 February 2026 02:11:34 +0000 (0:00:02.539) 0:04:05.384 ****
2026-02-04 02:11:41.430221 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:11:41.430229 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:11:41.430237 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:11:41.430245 | orchestrator |
2026-02-04 02:11:41.430253 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-02-04 02:11:41.430262 | orchestrator | Wednesday 04 February 2026 02:11:37 +0000 (0:00:03.468) 0:04:08.852 ****
2026-02-04 02:11:41.430271 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-02-04 02:11:41.430281 | orchestrator |
2026-02-04 02:11:41.430289 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-02-04 02:11:41.430297 | orchestrator | Wednesday 04 February 2026 02:11:38 +0000 (0:00:01.170) 0:04:10.022 ****
2026-02-04 02:11:41.430309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 02:11:41.430315 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:11:41.430321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 02:11:41.430333 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:11:41.430338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 02:11:41.430344 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:11:41.430349 | orchestrator |
2026-02-04 02:11:41.430354 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-02-04 02:11:41.430360 | orchestrator | Wednesday 04 February 2026 02:11:39 +0000 (0:00:01.146) 0:04:11.169 ****
2026-02-04 02:11:41.430365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 02:11:41.430370 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:11:41.430376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 02:11:41.430386 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:12:07.172384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-04 02:12:07.172504 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:12:07.172523 | orchestrator |
2026-02-04 02:12:07.172533 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2026-02-04 02:12:07.172545 | orchestrator | Wednesday 04 February 2026 02:11:41 +0000 (0:00:01.439) 0:04:12.609 ****
2026-02-04 02:12:07.172556 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:12:07.172566 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:12:07.172576 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:12:07.172586 | orchestrator |
2026-02-04 02:12:07.172595 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-04 02:12:07.172605 | orchestrator | Wednesday 04 February 2026 02:11:43 +0000 (0:00:01.760) 0:04:14.369 ****
2026-02-04 02:12:07.172616 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:12:07.172628 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:12:07.172638 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:12:07.172648 | orchestrator |
2026-02-04 02:12:07.172658 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-04 02:12:07.172669 | orchestrator | Wednesday 04 February 2026 02:11:46 +0000 (0:00:02.900) 0:04:17.269 ****
2026-02-04 02:12:07.172705 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:12:07.172716 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:12:07.172727 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:12:07.172736 | orchestrator |
2026-02-04 02:12:07.172762 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2026-02-04 02:12:07.172774 | orchestrator | Wednesday 04 February 2026 02:11:49 +0000 (0:00:02.969) 0:04:20.239 ****
2026-02-04 02:12:07.172785 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2026-02-04 02:12:07.172796 | orchestrator |
2026-02-04 02:12:07.172805 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2026-02-04 02:12:07.172815 | orchestrator | Wednesday 04 February 2026 02:11:50 +0000 (0:00:01.324) 0:04:21.563 ****
2026-02-04 02:12:07.172826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-04 02:12:07.172837 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:12:07.172848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-04 02:12:07.172858 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:12:07.172868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-04 02:12:07.172879 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:12:07.172889 | orchestrator |
2026-02-04 02:12:07.172927 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2026-02-04 02:12:07.172940 | orchestrator | Wednesday 04 February 2026 02:11:51 +0000 (0:00:01.431) 0:04:22.995 ****
2026-02-04 02:12:07.172973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-04 02:12:07.172984 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:12:07.172995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-04 02:12:07.173018 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:12:07.173031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2026-02-04 02:12:07.173041 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:12:07.173052 | orchestrator |
2026-02-04 02:12:07.173068 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2026-02-04 02:12:07.173076 | orchestrator | Wednesday 04 February 2026 02:11:53 +0000 (0:00:01.463) 0:04:24.458 ****
2026-02-04 02:12:07.173084 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:12:07.173091 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:12:07.173098 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:12:07.173105 | orchestrator |
2026-02-04 02:12:07.173113 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-04 02:12:07.173120 | orchestrator | Wednesday 04 February 2026 02:11:55 +0000 (0:00:02.191) 0:04:26.649 ****
2026-02-04 02:12:07.173128 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:12:07.173135 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:12:07.173142 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:12:07.173150 | orchestrator |
2026-02-04 02:12:07.173157 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-04 02:12:07.173165 | orchestrator | Wednesday 04 February 2026 02:11:58 +0000 (0:00:02.669) 0:04:29.319 ****
2026-02-04 02:12:07.173172 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:12:07.173180 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:12:07.173187 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:12:07.173194 | orchestrator |
2026-02-04 02:12:07.173201 | orchestrator | TASK [include_role : octavia] **************************************************
2026-02-04 02:12:07.173209 | orchestrator | Wednesday 04 February 2026 02:12:01 +0000 (0:00:03.542) 0:04:32.862 ****
2026-02-04 02:12:07.173217 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:12:07.173224 | orchestrator |
2026-02-04 02:12:07.173231 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2026-02-04 02:12:07.173238 | orchestrator | Wednesday 04 February 2026 02:12:03 +0000 (0:00:01.623) 0:04:34.485 ****
2026-02-04 02:12:07.173247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 02:12:07.173255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 02:12:07.173276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-04 02:12:07.968351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-04 02:12:07.968453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-04 02:12:07.968473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 02:12:07.968487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 02:12:07.968501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-04 02:12:07.968540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-04 02:12:07.968571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 02:12:07.968585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-04 02:12:07.968593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 02:12:07.968602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-04 02:12:07.968638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-04 02:12:07.968653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-04 02:12:07.968661 | orchestrator |
2026-02-04 02:12:07.968671 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2026-02-04 02:12:07.968679 | orchestrator | Wednesday 04 February 2026 02:12:07 +0000 (0:00:04.020) 0:04:38.506 ****
2026-02-04 02:12:07.968693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 02:12:08.118668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 02:12:08.118762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-04 02:12:08.118778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-04 02:12:08.118789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-04 02:12:08.118821 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:12:08.118835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 02:12:08.118852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 02:12:08.118977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-04 02:12:08.119005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-04 02:12:08.119022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-04 02:12:08.119051 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:12:08.119070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-04 02:12:08.119087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-02-04 02:12:08.119104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-04 02:12:08.119143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-04 02:12:20.348002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-04 02:12:20.348085 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:12:20.348093 | orchestrator |
2026-02-04 02:12:20.348100 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-02-04 02:12:20.348106 | orchestrator | Wednesday 04 February 2026 02:12:08 +0000 (0:00:00.797) 0:04:39.303 ****
2026-02-04 02:12:20.348111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-04 02:12:20.348134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-04 02:12:20.348140 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:12:20.348145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-04 02:12:20.348150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-04 02:12:20.348154 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:12:20.348159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-04 02:12:20.348163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-02-04 02:12:20.348168 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:12:20.348172 | orchestrator |
2026-02-04 02:12:20.348176 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-02-04 02:12:20.348181 | orchestrator | Wednesday 04 February 2026 02:12:09 +0000 (0:00:00.980) 0:04:40.283 ****
2026-02-04 02:12:20.348185 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:12:20.348189 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:12:20.348194 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:12:20.348198 | orchestrator |
2026-02-04 02:12:20.348202 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-02-04 02:12:20.348207 | orchestrator | Wednesday 04 February 2026 02:12:10 +0000 (0:00:01.837) 0:04:42.121 ****
2026-02-04 02:12:20.348211 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:12:20.348215 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:12:20.348220 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:12:20.348224 | orchestrator |
2026-02-04 02:12:20.348229 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-02-04 02:12:20.348233 | orchestrator | Wednesday 04 February 2026 02:12:13 +0000 (0:00:02.256) 0:04:44.378 ****
2026-02-04 02:12:20.348238 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:12:20.348242 | orchestrator | 2026-02-04 02:12:20.348247 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-04 02:12:20.348251 | orchestrator | Wednesday 04 February 2026 02:12:14 +0000 (0:00:01.530) 0:04:45.909 **** 2026-02-04 02:12:20.348267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 02:12:20.348286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 02:12:20.348296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 02:12:20.348302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 02:12:20.348312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 02:12:20.348325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 02:12:22.608179 | orchestrator | 2026-02-04 02:12:22.608271 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-04 02:12:22.608289 | orchestrator | Wednesday 04 February 2026 02:12:20 +0000 (0:00:05.619) 0:04:51.528 **** 2026-02-04 02:12:22.608304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 02:12:22.608323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 02:12:22.608337 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:12:22.608361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 02:12:22.608371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 02:12:22.608414 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:12:22.608423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-04 02:12:22.608431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-04 02:12:22.608439 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:12:22.608447 | orchestrator | 2026-02-04 02:12:22.608454 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-04 02:12:22.608462 | orchestrator | Wednesday 04 February 2026 02:12:21 +0000 (0:00:01.180) 0:04:52.709 **** 2026-02-04 02:12:22.608495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-04 02:12:22.608505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 02:12:22.608516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 02:12:22.608531 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:12:22.608543 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-04 02:12:22.608551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 02:12:22.608558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 02:12:22.608566 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:12:22.608573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-04 02:12:22.608581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 02:12:22.608600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-04 02:12:30.068921 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:12:30.069015 | orchestrator | 2026-02-04 02:12:30.069026 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-04 02:12:30.069034 | orchestrator | Wednesday 04 February 2026 02:12:22 +0000 (0:00:01.075) 0:04:53.785 **** 2026-02-04 
02:12:30.069041 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:12:30.069048 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:12:30.069055 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:12:30.069061 | orchestrator | 2026-02-04 02:12:30.069067 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-04 02:12:30.069074 | orchestrator | Wednesday 04 February 2026 02:12:23 +0000 (0:00:00.542) 0:04:54.327 **** 2026-02-04 02:12:30.069080 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:12:30.069087 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:12:30.069093 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:12:30.069099 | orchestrator | 2026-02-04 02:12:30.069105 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-04 02:12:30.069112 | orchestrator | Wednesday 04 February 2026 02:12:25 +0000 (0:00:01.929) 0:04:56.257 **** 2026-02-04 02:12:30.069118 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:12:30.069125 | orchestrator | 2026-02-04 02:12:30.069131 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-04 02:12:30.069137 | orchestrator | Wednesday 04 February 2026 02:12:27 +0000 (0:00:02.351) 0:04:58.609 **** 2026-02-04 02:12:30.069146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 02:12:30.069177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 02:12:30.069197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:30.069204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:30.069225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 02:12:30.069233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 02:12:30.069241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 02:12:30.069247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:30.069260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:30.069266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 02:12:30.069277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 02:12:30.069284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 02:12:30.069296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:31.745642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:31.745768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 02:12:31.745817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 02:12:31.745849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 02:12:31.745860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 02:12:31.745892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:31.745942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 02:12:31.745971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:31.745996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:31.746091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 02:12:31.746116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:31.746136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 02:12:31.746171 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 02:12:32.555729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 02:12:32.555829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:32.555861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:32.555873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 02:12:32.555884 | orchestrator | 2026-02-04 02:12:32.555950 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-04 02:12:32.555963 | orchestrator | Wednesday 04 February 2026 02:12:31 +0000 (0:00:04.474) 0:05:03.083 **** 2026-02-04 02:12:32.555975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 
'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 02:12:32.555987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 02:12:32.556039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:32.556057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:32.556075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 02:12:32.556103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': 
True}}}})  2026-02-04 02:12:32.556123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 02:12:32.556150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 02:12:32.732222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 02:12:32.732305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:32.732330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:32.732338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:32.732346 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:32.732355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 02:12:32.732362 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:12:32.732372 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 02:12:32.732414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-04 02:12:32.732424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 02:12:32.732435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:32.732443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:32.732451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 02:12:32.732464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 02:12:32.732471 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:12:32.732485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 02:12:34.897270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:34.897353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:34.897382 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 02:12:34.897395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-04 02:12:34.897406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-04 02:12:34.897434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:34.897456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 02:12:34.897465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 02:12:34.897474 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:12:34.897484 | orchestrator | 2026-02-04 02:12:34.897494 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-04 02:12:34.897503 | orchestrator | Wednesday 04 February 2026 02:12:32 +0000 (0:00:00.974) 0:05:04.058 **** 2026-02-04 02:12:34.897528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-04 02:12:34.897540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-04 02:12:34.897551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 02:12:34.897562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 02:12:34.897572 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:12:34.897580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})  2026-02-04 02:12:34.897594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-04 02:12:34.897607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 02:12:34.897622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 02:12:34.897637 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:12:34.897652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-04 02:12:34.897664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-04 02:12:34.897678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 02:12:34.897700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-04 02:12:43.741432 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:12:43.741555 | orchestrator | 2026-02-04 02:12:43.741571 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-04 02:12:43.741583 | orchestrator | Wednesday 04 February 2026 02:12:34 +0000 (0:00:02.012) 0:05:06.070 **** 2026-02-04 02:12:43.741598 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:12:43.741611 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:12:43.741634 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:12:43.741649 | orchestrator | 2026-02-04 02:12:43.741664 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-04 02:12:43.741678 | orchestrator | Wednesday 04 February 2026 02:12:35 +0000 (0:00:00.603) 0:05:06.674 **** 2026-02-04 02:12:43.741692 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:12:43.741707 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:12:43.741721 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:12:43.741734 | orchestrator | 2026-02-04 02:12:43.741748 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-04 02:12:43.741761 | orchestrator | Wednesday 04 February 2026 02:12:37 +0000 (0:00:01.736) 0:05:08.410 **** 2026-02-04 02:12:43.741774 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:12:43.741786 | orchestrator | 2026-02-04 02:12:43.741799 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-04 02:12:43.741812 | orchestrator | Wednesday 04 February 2026 02:12:39 +0000 (0:00:02.154) 0:05:10.564 **** 
2026-02-04 02:12:43.741830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 02:12:43.741885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 02:12:43.742097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 02:12:43.742133 | orchestrator | 2026-02-04 02:12:43.742153 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-04 02:12:43.742199 | orchestrator | Wednesday 04 February 2026 02:12:41 +0000 (0:00:02.308) 0:05:12.873 **** 2026-02-04 02:12:43.742216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-04 02:12:43.742257 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:12:43.742274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-04 02:12:43.742291 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:12:43.742307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-04 02:12:43.742322 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:12:43.742337 | orchestrator | 2026-02-04 02:12:43.742352 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-04 02:12:43.742365 | orchestrator | Wednesday 04 February 2026 02:12:42 +0000 (0:00:00.427) 0:05:13.301 **** 2026-02-04 02:12:43.742381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-04 02:12:43.742399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-04 02:12:43.742413 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:12:43.742429 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:12:43.742445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-04 02:12:43.742461 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:12:43.742476 | orchestrator | 2026-02-04 02:12:43.742491 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-04 02:12:43.742505 | orchestrator | Wednesday 04 
February 2026 02:12:42 +0000 (0:00:00.707) 0:05:14.009 **** 2026-02-04 02:12:43.742534 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:12:54.229567 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:12:54.229677 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:12:54.229691 | orchestrator | 2026-02-04 02:12:54.229704 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-04 02:12:54.229718 | orchestrator | Wednesday 04 February 2026 02:12:43 +0000 (0:00:00.916) 0:05:14.925 **** 2026-02-04 02:12:54.229729 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:12:54.229764 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:12:54.229776 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:12:54.229787 | orchestrator | 2026-02-04 02:12:54.229798 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-04 02:12:54.229809 | orchestrator | Wednesday 04 February 2026 02:12:45 +0000 (0:00:01.420) 0:05:16.346 **** 2026-02-04 02:12:54.229820 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:12:54.229832 | orchestrator | 2026-02-04 02:12:54.229842 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-04 02:12:54.229853 | orchestrator | Wednesday 04 February 2026 02:12:46 +0000 (0:00:01.696) 0:05:18.042 **** 2026-02-04 02:12:54.229884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 02:12:54.229969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 02:12:54.229984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 02:12:54.230015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 02:12:54.230109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 02:12:54.230124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 02:12:54.230136 | orchestrator | 2026-02-04 02:12:54.230150 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-04 02:12:54.230164 | orchestrator | Wednesday 04 February 2026 02:12:53 +0000 (0:00:06.170) 0:05:24.212 **** 2026-02-04 02:12:54.230177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-04 02:12:54.230202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-04 02:13:00.460405 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:13:00.460494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-04 02:13:00.460504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-04 02:13:00.460511 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:13:00.460516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-04 02:13:00.460521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-04 02:13:00.460541 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:13:00.460546 | orchestrator | 2026-02-04 02:13:00.460552 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-04 
02:13:00.460558 | orchestrator | Wednesday 04 February 2026 02:12:54 +0000 (0:00:01.197) 0:05:25.410 **** 2026-02-04 02:13:00.460574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 02:13:00.460582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 02:13:00.460589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 02:13:00.460597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 02:13:00.460602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 02:13:00.460607 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:13:00.460611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 02:13:00.460616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 02:13:00.460621 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 02:13:00.460626 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:13:00.460630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 02:13:00.460635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-04 02:13:00.460640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 02:13:00.460644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-04 02:13:00.460649 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:13:00.460654 | orchestrator | 2026-02-04 02:13:00.460663 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-04 02:13:00.460667 | orchestrator | Wednesday 04 February 2026 02:12:55 +0000 (0:00:01.050) 0:05:26.461 **** 2026-02-04 02:13:00.460672 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:13:00.460677 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:13:00.460681 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:13:00.460686 | orchestrator | 2026-02-04 02:13:00.460691 
| orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-04 02:13:00.460695 | orchestrator | Wednesday 04 February 2026 02:12:56 +0000 (0:00:01.316) 0:05:27.777 **** 2026-02-04 02:13:00.460700 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:13:00.460705 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:13:00.460709 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:13:00.460714 | orchestrator | 2026-02-04 02:13:00.460719 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-04 02:13:00.460723 | orchestrator | Wednesday 04 February 2026 02:12:58 +0000 (0:00:02.358) 0:05:30.135 **** 2026-02-04 02:13:00.460728 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:13:00.460732 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:13:00.460737 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:13:00.460742 | orchestrator | 2026-02-04 02:13:00.460746 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-04 02:13:00.460751 | orchestrator | Wednesday 04 February 2026 02:12:59 +0000 (0:00:00.762) 0:05:30.898 **** 2026-02-04 02:13:00.460755 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:13:00.460760 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:13:00.460765 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:13:00.460769 | orchestrator | 2026-02-04 02:13:00.460774 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-04 02:13:00.460778 | orchestrator | Wednesday 04 February 2026 02:13:00 +0000 (0:00:00.368) 0:05:31.267 **** 2026-02-04 02:13:00.460783 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:13:00.460790 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:13:48.854583 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:13:48.854717 | orchestrator | 2026-02-04 02:13:48.854740 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-02-04 02:13:48.854758 | orchestrator | Wednesday 04 February 2026 02:13:00 +0000 (0:00:00.378) 0:05:31.645 **** 2026-02-04 02:13:48.854776 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:13:48.854791 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:13:48.854806 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:13:48.854821 | orchestrator | 2026-02-04 02:13:48.854838 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-04 02:13:48.854853 | orchestrator | Wednesday 04 February 2026 02:13:00 +0000 (0:00:00.355) 0:05:32.001 **** 2026-02-04 02:13:48.854868 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:13:48.854885 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:13:48.854900 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:13:48.854963 | orchestrator | 2026-02-04 02:13:48.854979 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-04 02:13:48.855017 | orchestrator | Wednesday 04 February 2026 02:13:01 +0000 (0:00:00.718) 0:05:32.719 **** 2026-02-04 02:13:48.855036 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:13:48.855055 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:13:48.855074 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:13:48.855092 | orchestrator | 2026-02-04 02:13:48.855110 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-04 02:13:48.855129 | orchestrator | Wednesday 04 February 2026 02:13:02 +0000 (0:00:00.619) 0:05:33.339 **** 2026-02-04 02:13:48.855149 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:13:48.855170 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:13:48.855190 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:13:48.855209 | orchestrator | 2026-02-04 02:13:48.855230 | orchestrator | 
RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-04 02:13:48.855285 | orchestrator | Wednesday 04 February 2026 02:13:02 +0000 (0:00:00.713) 0:05:34.052 **** 2026-02-04 02:13:48.855308 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:13:48.855328 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:13:48.855348 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:13:48.855371 | orchestrator | 2026-02-04 02:13:48.855391 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-04 02:13:48.855411 | orchestrator | Wednesday 04 February 2026 02:13:03 +0000 (0:00:00.397) 0:05:34.450 **** 2026-02-04 02:13:48.855430 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:13:48.855449 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:13:48.855469 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:13:48.855489 | orchestrator | 2026-02-04 02:13:48.855509 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-04 02:13:48.855529 | orchestrator | Wednesday 04 February 2026 02:13:04 +0000 (0:00:01.315) 0:05:35.766 **** 2026-02-04 02:13:48.855548 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:13:48.855568 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:13:48.855588 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:13:48.855609 | orchestrator | 2026-02-04 02:13:48.855629 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-04 02:13:48.855649 | orchestrator | Wednesday 04 February 2026 02:13:05 +0000 (0:00:00.858) 0:05:36.625 **** 2026-02-04 02:13:48.855668 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:13:48.855688 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:13:48.855708 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:13:48.855728 | orchestrator | 2026-02-04 02:13:48.855747 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] 
****************
2026-02-04 02:13:48.855768 | orchestrator | Wednesday 04 February 2026 02:13:06 +0000 (0:00:00.884) 0:05:37.510 ****
2026-02-04 02:13:48.855788 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:13:48.855808 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:13:48.855826 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:13:48.855842 | orchestrator |
2026-02-04 02:13:48.855862 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-04 02:13:48.855882 | orchestrator | Wednesday 04 February 2026 02:13:15 +0000 (0:00:09.431) 0:05:46.941 ****
2026-02-04 02:13:48.855901 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:13:48.856027 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:13:48.856048 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:13:48.856067 | orchestrator |
2026-02-04 02:13:48.856087 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-04 02:13:48.856107 | orchestrator | Wednesday 04 February 2026 02:13:16 +0000 (0:00:01.190) 0:05:48.132 ****
2026-02-04 02:13:48.856125 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:13:48.856144 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:13:48.856164 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:13:48.856185 | orchestrator |
2026-02-04 02:13:48.856205 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-04 02:13:48.856225 | orchestrator | Wednesday 04 February 2026 02:13:32 +0000 (0:00:15.650) 0:06:03.783 ****
2026-02-04 02:13:48.856246 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:13:48.856266 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:13:48.856287 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:13:48.856305 | orchestrator |
2026-02-04 02:13:48.856326 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-04 02:13:48.856346 | orchestrator | Wednesday 04 February 2026 02:13:33 +0000 (0:00:00.792) 0:06:04.576 ****
2026-02-04 02:13:48.856367 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:13:48.856386 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:13:48.856404 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:13:48.856422 | orchestrator |
2026-02-04 02:13:48.856441 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-04 02:13:48.856461 | orchestrator | Wednesday 04 February 2026 02:13:43 +0000 (0:00:09.697) 0:06:14.274 ****
2026-02-04 02:13:48.856507 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:13:48.856529 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:13:48.856550 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:13:48.856570 | orchestrator |
2026-02-04 02:13:48.856590 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-04 02:13:48.856611 | orchestrator | Wednesday 04 February 2026 02:13:43 +0000 (0:00:00.776) 0:06:15.050 ****
2026-02-04 02:13:48.856630 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:13:48.856647 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:13:48.856666 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:13:48.856686 | orchestrator |
2026-02-04 02:13:48.856734 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-04 02:13:48.856755 | orchestrator | Wednesday 04 February 2026 02:13:44 +0000 (0:00:00.398) 0:06:15.449 ****
2026-02-04 02:13:48.856773 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:13:48.856790 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:13:48.856808 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:13:48.856825 | orchestrator |
2026-02-04 02:13:48.856844 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-04 02:13:48.856862 | orchestrator | Wednesday 04 February 2026 02:13:44 +0000 (0:00:00.359) 0:06:15.808 ****
2026-02-04 02:13:48.856881 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:13:48.856900 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:13:48.856950 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:13:48.856969 | orchestrator |
2026-02-04 02:13:48.856989 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-04 02:13:48.857008 | orchestrator | Wednesday 04 February 2026 02:13:44 +0000 (0:00:00.348) 0:06:16.157 ****
2026-02-04 02:13:48.857042 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:13:48.857075 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:13:48.857096 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:13:48.857117 | orchestrator |
2026-02-04 02:13:48.857137 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-04 02:13:48.857157 | orchestrator | Wednesday 04 February 2026 02:13:45 +0000 (0:00:00.760) 0:06:16.918 ****
2026-02-04 02:13:48.857177 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:13:48.857196 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:13:48.857216 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:13:48.857235 | orchestrator |
2026-02-04 02:13:48.857255 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-04 02:13:48.857274 | orchestrator | Wednesday 04 February 2026 02:13:46 +0000 (0:00:00.410) 0:06:17.329 ****
2026-02-04 02:13:48.857293 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:13:48.857311 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:13:48.857329 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:13:48.857346 | orchestrator |
2026-02-04 02:13:48.857364 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-04 02:13:48.857381 | orchestrator | Wednesday 04 February 2026 02:13:47 +0000 (0:00:00.958) 0:06:18.287 ****
2026-02-04 02:13:48.857399 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:13:48.857419 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:13:48.857436 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:13:48.857455 | orchestrator |
2026-02-04 02:13:48.857473 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 02:13:48.857493 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-04 02:13:48.857512 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-04 02:13:48.857532 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-04 02:13:48.857549 | orchestrator |
2026-02-04 02:13:48.857584 | orchestrator |
2026-02-04 02:13:48.857604 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 02:13:48.857622 | orchestrator | Wednesday 04 February 2026 02:13:47 +0000 (0:00:00.857) 0:06:19.144 ****
2026-02-04 02:13:48.857649 | orchestrator | ===============================================================================
2026-02-04 02:13:48.857669 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.65s
2026-02-04 02:13:48.857687 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.70s
2026-02-04 02:13:48.857704 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.43s
2026-02-04 02:13:48.857721 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.17s
2026-02-04 02:13:48.857738 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.62s
2026-02-04 02:13:48.857753 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.35s
2026-02-04 02:13:48.857768 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.05s
2026-02-04 02:13:48.857783 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.72s
2026-02-04 02:13:48.857801 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.71s
2026-02-04 02:13:48.857819 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.47s
2026-02-04 02:13:48.857839 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.35s
2026-02-04 02:13:48.857856 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.02s
2026-02-04 02:13:48.857876 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.88s
2026-02-04 02:13:48.857894 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.80s
2026-02-04 02:13:48.857985 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.68s
2026-02-04 02:13:48.858005 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.64s
2026-02-04 02:13:48.858094 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.64s
2026-02-04 02:13:48.858116 | orchestrator | proxysql-config : Copying over nova-cell ProxySQL rules config ---------- 3.54s
2026-02-04 02:13:48.858132 | orchestrator | proxysql-config : Copying over nova-cell ProxySQL rules config ---------- 3.47s
2026-02-04 02:13:48.858150 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.46s
2026-02-04 02:13:51.552247 | orchestrator | 2026-02-04 02:13:51 | INFO  | Task 14036988-6d70-4b08-b4d7-7bb4aa4ad537 (opensearch) was prepared for execution.
2026-02-04 02:13:51.552386 | orchestrator | 2026-02-04 02:13:51 | INFO  | It takes a moment until task 14036988-6d70-4b08-b4d7-7bb4aa4ad537 (opensearch) has been started and output is visible here.
2026-02-04 02:14:03.408613 | orchestrator |
2026-02-04 02:14:03.408723 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 02:14:03.408739 | orchestrator |
2026-02-04 02:14:03.408754 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 02:14:03.408776 | orchestrator | Wednesday 04 February 2026 02:13:56 +0000 (0:00:00.295) 0:00:00.295 ****
2026-02-04 02:14:03.408792 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:14:03.408808 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:14:03.408824 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:14:03.408840 | orchestrator |
2026-02-04 02:14:03.408855 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 02:14:03.408868 | orchestrator | Wednesday 04 February 2026 02:13:56 +0000 (0:00:00.340) 0:00:00.636 ****
2026-02-04 02:14:03.408893 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-02-04 02:14:03.408903 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-02-04 02:14:03.408969 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-02-04 02:14:03.408979 | orchestrator |
2026-02-04 02:14:03.408989 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-02-04 02:14:03.409080 | orchestrator |
2026-02-04 02:14:03.409098 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-04 02:14:03.409112 | orchestrator | Wednesday 04 February 2026 02:13:57 +0000 (0:00:00.531) 0:00:01.168 ****
2026-02-04 02:14:03.409128 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:14:03.409144 | orchestrator |
2026-02-04 02:14:03.409161 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-02-04 02:14:03.409179 | orchestrator | Wednesday 04 February 2026 02:13:58 +0000 (0:00:00.534) 0:00:01.703 ****
2026-02-04 02:14:03.409196 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 02:14:03.409212 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 02:14:03.409224 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-04 02:14:03.409235 | orchestrator |
2026-02-04 02:14:03.409245 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-02-04 02:14:03.409255 | orchestrator | Wednesday 04 February 2026 02:13:58 +0000 (0:00:00.651) 0:00:02.354 ****
2026-02-04 02:14:03.409269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 02:14:03.409284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 02:14:03.409315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 02:14:03.409341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 02:14:03.409376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 02:14:03.409395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 02:14:03.409413 | orchestrator |
2026-02-04 02:14:03.409428 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-04 02:14:03.409443 | orchestrator | Wednesday 04 February 2026 02:14:00 +0000 (0:00:01.695) 0:00:04.049 ****
2026-02-04 02:14:03.409458 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:14:03.409473 | orchestrator |
2026-02-04 02:14:03.409488 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-02-04 02:14:03.409504 | orchestrator | Wednesday 04 February 2026 02:14:00 +0000 (0:00:00.551) 0:00:04.601 ****
2026-02-04 02:14:03.409543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 02:14:04.317498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 02:14:04.317589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 02:14:04.317599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 02:14:04.317606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 02:14:04.317656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 02:14:04.317664 | orchestrator |
2026-02-04 02:14:04.317671 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-02-04 02:14:04.317677 | orchestrator | Wednesday 04 February 2026 02:14:03 +0000 (0:00:02.452) 0:00:07.053 ****
2026-02-04 02:14:04.317684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 02:14:04.317690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 02:14:04.317696 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:14:04.317703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 02:14:04.317722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 02:14:05.508831 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:14:05.508907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 02:14:05.508944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 02:14:05.508974 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:14:05.509099 | orchestrator |
2026-02-04 02:14:05.509109 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-02-04 02:14:05.509140 | orchestrator | Wednesday 04 February 2026 02:14:04 +0000 (0:00:00.912) 0:00:07.965 ****
2026-02-04 02:14:05.509165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 02:14:05.509202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 02:14:05.509224 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:14:05.509230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 02:14:05.509235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 02:14:05.509240 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:14:05.509250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 02:14:05.509259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-04 02:14:05.509264 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:14:05.509269 | orchestrator |
2026-02-04 02:14:05.509274 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2026-02-04 02:14:05.509282 | orchestrator | Wednesday 04 February 2026 02:14:05 +0000 (0:00:01.184) 0:00:09.150 ****
2026-02-04 02:14:13.702563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-02-04 02:14:13.702666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name':
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 02:14:13.702683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 02:14:13.702735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 02:14:13.702769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 02:14:13.702783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 02:14:13.702805 | orchestrator | 2026-02-04 02:14:13.702818 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-04 02:14:13.702830 | orchestrator | Wednesday 04 February 2026 02:14:07 +0000 (0:00:02.202) 0:00:11.352 **** 2026-02-04 02:14:13.702841 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:14:13.702853 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:14:13.702863 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:14:13.702874 | orchestrator | 2026-02-04 02:14:13.702884 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-04 02:14:13.702895 | orchestrator | Wednesday 04 February 2026 02:14:10 +0000 (0:00:02.402) 0:00:13.754 **** 2026-02-04 02:14:13.702906 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:14:13.702953 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:14:13.702964 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:14:13.702974 | 
orchestrator | 2026-02-04 02:14:13.702984 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-04 02:14:13.702993 | orchestrator | Wednesday 04 February 2026 02:14:12 +0000 (0:00:01.940) 0:00:15.695 **** 2026-02-04 02:14:13.703004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 02:14:13.703021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-02-04 02:14:13.703040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-04 02:16:47.132634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-02-04 02:16:47.132775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 02:16:47.132808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-04 02:16:47.132821 | orchestrator | 2026-02-04 02:16:47.132834 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-04 02:16:47.132845 | orchestrator | Wednesday 04 February 2026 02:14:13 +0000 (0:00:01.652) 0:00:17.347 **** 2026-02-04 02:16:47.132855 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:16:47.132867 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:16:47.132876 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:16:47.132886 | orchestrator | 2026-02-04 02:16:47.132897 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-04 02:16:47.132907 | orchestrator | Wednesday 04 February 2026 02:14:13 +0000 (0:00:00.299) 0:00:17.647 **** 2026-02-04 02:16:47.132917 | orchestrator | 2026-02-04 02:16:47.132926 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-04 02:16:47.132936 | orchestrator | Wednesday 04 February 2026 02:14:14 +0000 (0:00:00.070) 0:00:17.717 **** 2026-02-04 02:16:47.132945 | orchestrator | 2026-02-04 02:16:47.132955 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-04 02:16:47.132973 | orchestrator | Wednesday 04 February 2026 02:14:14 +0000 (0:00:00.067) 0:00:17.785 **** 2026-02-04 02:16:47.132983 | orchestrator | 2026-02-04 02:16:47.132992 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-04 02:16:47.133112 | orchestrator | Wednesday 04 February 2026 02:14:14 +0000 (0:00:00.076) 0:00:17.862 **** 2026-02-04 02:16:47.133133 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:16:47.133145 | orchestrator | 
2026-02-04 02:16:47.133157 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-04 02:16:47.133169 | orchestrator | Wednesday 04 February 2026 02:14:14 +0000 (0:00:00.206) 0:00:18.069 **** 2026-02-04 02:16:47.133179 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:16:47.133191 | orchestrator | 2026-02-04 02:16:47.133202 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-04 02:16:47.133213 | orchestrator | Wednesday 04 February 2026 02:14:15 +0000 (0:00:00.734) 0:00:18.804 **** 2026-02-04 02:16:47.133224 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:16:47.133235 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:16:47.133246 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:16:47.133257 | orchestrator | 2026-02-04 02:16:47.133269 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-04 02:16:47.133280 | orchestrator | Wednesday 04 February 2026 02:15:21 +0000 (0:01:06.346) 0:01:25.151 **** 2026-02-04 02:16:47.133290 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:16:47.133300 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:16:47.133309 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:16:47.133321 | orchestrator | 2026-02-04 02:16:47.133338 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-04 02:16:47.133355 | orchestrator | Wednesday 04 February 2026 02:16:36 +0000 (0:01:14.909) 0:02:40.060 **** 2026-02-04 02:16:47.133371 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:16:47.133387 | orchestrator | 2026-02-04 02:16:47.133403 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-04 02:16:47.133418 | orchestrator | Wednesday 04 February 2026 02:16:37 +0000 
(0:00:00.620) 0:02:40.681 **** 2026-02-04 02:16:47.133434 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:16:47.133450 | orchestrator | 2026-02-04 02:16:47.133466 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-04 02:16:47.133484 | orchestrator | Wednesday 04 February 2026 02:16:39 +0000 (0:00:02.535) 0:02:43.216 **** 2026-02-04 02:16:47.133501 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:16:47.133517 | orchestrator | 2026-02-04 02:16:47.133533 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-04 02:16:47.133549 | orchestrator | Wednesday 04 February 2026 02:16:41 +0000 (0:00:02.317) 0:02:45.534 **** 2026-02-04 02:16:47.133565 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:16:47.133582 | orchestrator | 2026-02-04 02:16:47.133597 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-04 02:16:47.133612 | orchestrator | Wednesday 04 February 2026 02:16:44 +0000 (0:00:02.651) 0:02:48.186 **** 2026-02-04 02:16:47.133629 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:16:47.133647 | orchestrator | 2026-02-04 02:16:47.133663 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:16:47.133681 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 02:16:47.133699 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 02:16:47.133724 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-04 02:16:47.133735 | orchestrator | 2026-02-04 02:16:47.133745 | orchestrator | 2026-02-04 02:16:47.133764 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:16:47.133774 | orchestrator | Wednesday 04 
February 2026 02:16:47 +0000 (0:00:02.578) 0:02:50.764 **** 2026-02-04 02:16:47.133783 | orchestrator | =============================================================================== 2026-02-04 02:16:47.133793 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 74.91s 2026-02-04 02:16:47.133802 | orchestrator | opensearch : Restart opensearch container ------------------------------ 66.35s 2026-02-04 02:16:47.133812 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.65s 2026-02-04 02:16:47.133821 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.58s 2026-02-04 02:16:47.133831 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.54s 2026-02-04 02:16:47.133841 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.45s 2026-02-04 02:16:47.133850 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.40s 2026-02-04 02:16:47.133859 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.32s 2026-02-04 02:16:47.133869 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.20s 2026-02-04 02:16:47.133879 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.94s 2026-02-04 02:16:47.133888 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.70s 2026-02-04 02:16:47.133898 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.65s 2026-02-04 02:16:47.133908 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.18s 2026-02-04 02:16:47.133917 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.91s 2026-02-04 02:16:47.133927 | orchestrator | opensearch : Perform a 
flush -------------------------------------------- 0.74s 2026-02-04 02:16:47.133936 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.65s 2026-02-04 02:16:47.133956 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.62s 2026-02-04 02:16:47.535123 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-02-04 02:16:47.535214 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2026-02-04 02:16:47.535231 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s 2026-02-04 02:16:50.193926 | orchestrator | 2026-02-04 02:16:50 | INFO  | Task 44388dc0-0cec-4532-8a14-e645ae234a67 (memcached) was prepared for execution. 2026-02-04 02:16:50.194255 | orchestrator | 2026-02-04 02:16:50 | INFO  | It takes a moment until task 44388dc0-0cec-4532-8a14-e645ae234a67 (memcached) has been started and output is visible here. 
2026-02-04 02:17:02.818334 | orchestrator | 2026-02-04 02:17:02.818484 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 02:17:02.818512 | orchestrator | 2026-02-04 02:17:02.818532 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 02:17:02.819536 | orchestrator | Wednesday 04 February 2026 02:16:54 +0000 (0:00:00.288) 0:00:00.288 **** 2026-02-04 02:17:02.819602 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:17:02.819626 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:17:02.819649 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:17:02.819671 | orchestrator | 2026-02-04 02:17:02.819692 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 02:17:02.819712 | orchestrator | Wednesday 04 February 2026 02:16:55 +0000 (0:00:00.336) 0:00:00.624 **** 2026-02-04 02:17:02.819733 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-04 02:17:02.819753 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-04 02:17:02.819771 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-04 02:17:02.819790 | orchestrator | 2026-02-04 02:17:02.819810 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-04 02:17:02.819865 | orchestrator | 2026-02-04 02:17:02.819887 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-04 02:17:02.819906 | orchestrator | Wednesday 04 February 2026 02:16:55 +0000 (0:00:00.507) 0:00:01.131 **** 2026-02-04 02:17:02.819926 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:17:02.819948 | orchestrator | 2026-02-04 02:17:02.819967 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-02-04 02:17:02.819987 | orchestrator | Wednesday 04 February 2026 02:16:56 +0000 (0:00:00.544) 0:00:01.676 **** 2026-02-04 02:17:02.820009 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-04 02:17:02.820110 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-04 02:17:02.820132 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-04 02:17:02.820152 | orchestrator | 2026-02-04 02:17:02.820172 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-04 02:17:02.820192 | orchestrator | Wednesday 04 February 2026 02:16:56 +0000 (0:00:00.641) 0:00:02.317 **** 2026-02-04 02:17:02.820213 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-04 02:17:02.820233 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-04 02:17:02.820252 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-04 02:17:02.820272 | orchestrator | 2026-02-04 02:17:02.820292 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-02-04 02:17:02.820312 | orchestrator | Wednesday 04 February 2026 02:16:58 +0000 (0:00:01.891) 0:00:04.209 **** 2026-02-04 02:17:02.820353 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:17:02.820376 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:17:02.820396 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:17:02.820416 | orchestrator | 2026-02-04 02:17:02.820437 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-04 02:17:02.820457 | orchestrator | Wednesday 04 February 2026 02:17:00 +0000 (0:00:01.528) 0:00:05.737 **** 2026-02-04 02:17:02.820478 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:17:02.820498 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:17:02.820518 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:17:02.820538 | orchestrator | 2026-02-04 
02:17:02.820558 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:17:02.820579 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 02:17:02.820600 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 02:17:02.820620 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 02:17:02.820639 | orchestrator | 2026-02-04 02:17:02.820659 | orchestrator | 2026-02-04 02:17:02.820679 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:17:02.820699 | orchestrator | Wednesday 04 February 2026 02:17:02 +0000 (0:00:02.051) 0:00:07.789 **** 2026-02-04 02:17:02.820720 | orchestrator | =============================================================================== 2026-02-04 02:17:02.820740 | orchestrator | memcached : Restart memcached container --------------------------------- 2.05s 2026-02-04 02:17:02.820760 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.89s 2026-02-04 02:17:02.820779 | orchestrator | memcached : Check memcached container ----------------------------------- 1.53s 2026-02-04 02:17:02.820798 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.64s 2026-02-04 02:17:02.820817 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.54s 2026-02-04 02:17:02.820836 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2026-02-04 02:17:02.820856 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2026-02-04 02:17:05.510149 | orchestrator | 2026-02-04 02:17:05 | INFO  | Task 85ae992e-13ab-4011-955d-b3315f1a6abd (redis) was prepared for execution. 
2026-02-04 02:17:05.510267 | orchestrator | 2026-02-04 02:17:05 | INFO  | It takes a moment until task 85ae992e-13ab-4011-955d-b3315f1a6abd (redis) has been started and output is visible here. 2026-02-04 02:17:14.916875 | orchestrator | 2026-02-04 02:17:14.917006 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 02:17:14.917053 | orchestrator | 2026-02-04 02:17:14.917064 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 02:17:14.917074 | orchestrator | Wednesday 04 February 2026 02:17:10 +0000 (0:00:00.272) 0:00:00.272 **** 2026-02-04 02:17:14.917083 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:17:14.917103 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:17:14.917112 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:17:14.917121 | orchestrator | 2026-02-04 02:17:14.917130 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 02:17:14.917139 | orchestrator | Wednesday 04 February 2026 02:17:10 +0000 (0:00:00.316) 0:00:00.588 **** 2026-02-04 02:17:14.917148 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-04 02:17:14.917157 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-04 02:17:14.917165 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-04 02:17:14.917177 | orchestrator | 2026-02-04 02:17:14.917192 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-04 02:17:14.917206 | orchestrator | 2026-02-04 02:17:14.917221 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-04 02:17:14.917235 | orchestrator | Wednesday 04 February 2026 02:17:10 +0000 (0:00:00.452) 0:00:01.041 **** 2026-02-04 02:17:14.917247 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-04 02:17:14.917263 | orchestrator | 2026-02-04 02:17:14.917278 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-04 02:17:14.917294 | orchestrator | Wednesday 04 February 2026 02:17:11 +0000 (0:00:00.541) 0:00:01.582 **** 2026-02-04 02:17:14.917315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 02:17:14.917337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 02:17:14.917352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 02:17:14.917386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 02:17:14.917417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 02:17:14.917429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 02:17:14.917439 | orchestrator | 2026-02-04 02:17:14.917449 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-04 02:17:14.917459 | orchestrator | Wednesday 04 February 2026 02:17:12 +0000 (0:00:01.080) 0:00:02.663 **** 2026-02-04 02:17:14.917469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 02:17:14.917569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 02:17:14.917591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 02:17:14.917618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 02:17:14.917644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 02:17:19.200423 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 02:17:19.200546 | orchestrator | 2026-02-04 02:17:19.200570 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-04 02:17:19.200586 | orchestrator | Wednesday 04 February 2026 02:17:14 +0000 (0:00:02.437) 0:00:05.100 **** 2026-02-04 02:17:19.200602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 02:17:19.200636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 02:17:19.200647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 02:17:19.200674 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 02:17:19.200684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 02:17:19.200709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 02:17:19.200717 | orchestrator | 2026-02-04 02:17:19.200726 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-02-04 02:17:19.200734 | orchestrator | Wednesday 04 February 2026 02:17:17 +0000 (0:00:02.523) 0:00:07.623 **** 2026-02-04 02:17:19.200742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 02:17:19.200751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 
'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 02:17:19.200763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 02:17:19.200778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 02:17:19.200786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 02:17:19.200801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 02:17:25.941165 | orchestrator | 2026-02-04 02:17:25.941282 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-04 02:17:25.941299 | orchestrator | Wednesday 04 February 2026 02:17:18 +0000 (0:00:01.514) 0:00:09.138 **** 2026-02-04 02:17:25.941312 | orchestrator | 2026-02-04 02:17:25.941323 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-04 02:17:25.941334 | orchestrator | Wednesday 04 February 2026 02:17:19 +0000 (0:00:00.070) 0:00:09.209 **** 2026-02-04 02:17:25.941345 | orchestrator | 2026-02-04 02:17:25.941357 | orchestrator | TASK [redis : Flush handlers] 
**************************************************
2026-02-04 02:17:25.941387 | orchestrator | Wednesday 04 February 2026 02:17:19 +0000 (0:00:00.073) 0:00:09.282 ****
2026-02-04 02:17:25.941409 | orchestrator |
2026-02-04 02:17:25.941421 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-02-04 02:17:25.941432 | orchestrator | Wednesday 04 February 2026 02:17:19 +0000 (0:00:00.094) 0:00:09.376 ****
2026-02-04 02:17:25.941443 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:17:25.941456 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:17:25.941467 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:17:25.941478 | orchestrator |
2026-02-04 02:17:25.941489 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-02-04 02:17:25.941501 | orchestrator | Wednesday 04 February 2026 02:17:22 +0000 (0:00:03.067) 0:00:12.444 ****
2026-02-04 02:17:25.941540 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:17:25.941553 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:17:25.941564 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:17:25.941575 | orchestrator |
2026-02-04 02:17:25.941587 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 02:17:25.941598 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 02:17:25.941610 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 02:17:25.941636 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 02:17:25.941648 | orchestrator |
2026-02-04 02:17:25.941659 | orchestrator |
2026-02-04 02:17:25.941669 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 02:17:25.941681 | orchestrator | Wednesday 04 February 2026 02:17:25 +0000 (0:00:03.284) 0:00:15.728 ****
2026-02-04 02:17:25.941691 | orchestrator | ===============================================================================
2026-02-04 02:17:25.941702 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 3.28s
2026-02-04 02:17:25.941713 | orchestrator | redis : Restart redis container ----------------------------------------- 3.07s
2026-02-04 02:17:25.941724 | orchestrator | redis : Copying over redis config files --------------------------------- 2.52s
2026-02-04 02:17:25.941734 | orchestrator | redis : Copying over default config.json files -------------------------- 2.44s
2026-02-04 02:17:25.941745 | orchestrator | redis : Check redis containers ------------------------------------------ 1.51s
2026-02-04 02:17:25.941756 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.08s
2026-02-04 02:17:25.941766 | orchestrator | redis : include_tasks --------------------------------------------------- 0.54s
2026-02-04 02:17:25.941777 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s
2026-02-04 02:17:25.941788 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-02-04 02:17:25.941798 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.24s
2026-02-04 02:17:28.739691 | orchestrator | 2026-02-04 02:17:28 | INFO  | Task 1b24cacf-1c9c-4b5f-b73b-1b8f494fea9b (mariadb) was prepared for execution.
2026-02-04 02:17:28.739814 | orchestrator | 2026-02-04 02:17:28 | INFO  | It takes a moment until task 1b24cacf-1c9c-4b5f-b73b-1b8f494fea9b (mariadb) has been started and output is visible here.
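Each `changed:` item in the redis tasks above carries the full container definition the role renders per service, including a `healthcheck` mapping with a `CMD-SHELL` test. As a minimal sketch — the dict shape is copied from the log output above, not from any official API — the healthcheck command can be extracted from such a definition like this:

```python
# Container definition as it appears in the redis role output above
# (trimmed to the fields used here).
redis_service = {
    "container_name": "redis",
    "group": "redis",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/redis:7.0.15.20251130",
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_listen redis-server 6379"],
        "timeout": "30",
    },
}

def healthcheck_command(service):
    """Return the shell command a CMD-SHELL healthcheck runs, or None."""
    hc = service.get("healthcheck") or {}
    test = hc.get("test", [])
    if len(test) >= 2 and test[0] == "CMD-SHELL":
        return test[1]
    return None

print(healthcheck_command(redis_service))
```

The same accessor works for the redis-sentinel definition, whose test is `healthcheck_listen redis-sentinel 26379`.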
2026-02-04 02:17:43.397200 | orchestrator |
2026-02-04 02:17:43.397313 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 02:17:43.397331 | orchestrator |
2026-02-04 02:17:43.397343 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 02:17:43.397355 | orchestrator | Wednesday 04 February 2026 02:17:33 +0000 (0:00:00.184) 0:00:00.184 ****
2026-02-04 02:17:43.397367 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:17:43.397379 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:17:43.397391 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:17:43.397402 | orchestrator |
2026-02-04 02:17:43.397413 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 02:17:43.397425 | orchestrator | Wednesday 04 February 2026 02:17:33 +0000 (0:00:00.339) 0:00:00.523 ****
2026-02-04 02:17:43.397437 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-04 02:17:43.397449 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-04 02:17:43.397460 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-04 02:17:43.397471 | orchestrator |
2026-02-04 02:17:43.397482 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-04 02:17:43.397493 | orchestrator |
2026-02-04 02:17:43.397504 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-04 02:17:43.397536 | orchestrator | Wednesday 04 February 2026 02:17:34 +0000 (0:00:00.580) 0:00:01.103 ****
2026-02-04 02:17:43.397548 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 02:17:43.397559 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 02:17:43.397570 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 02:17:43.397581 | orchestrator |
2026-02-04 02:17:43.397592 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-04 02:17:43.397603 | orchestrator | Wednesday 04 February 2026 02:17:34 +0000 (0:00:00.409) 0:00:01.513 **** 2026-02-04 02:17:43.397615 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:17:43.397627 | orchestrator | 2026-02-04 02:17:43.397638 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-04 02:17:43.397649 | orchestrator | Wednesday 04 February 2026 02:17:35 +0000 (0:00:00.548) 0:00:02.061 **** 2026-02-04 02:17:43.397675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 02:17:43.397712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 02:17:43.397739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' 
server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 02:17:43.397753 | orchestrator | 2026-02-04 02:17:43.397765 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-04 02:17:43.397777 | orchestrator | Wednesday 04 February 2026 02:17:38 +0000 (0:00:02.767) 0:00:04.829 **** 2026-02-04 02:17:43.397788 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:17:43.397800 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:17:43.397811 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:17:43.397822 | orchestrator | 2026-02-04 02:17:43.397833 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-04 02:17:43.397844 | orchestrator | Wednesday 04 February 2026 02:17:38 +0000 (0:00:00.675) 0:00:05.504 **** 2026-02-04 02:17:43.397855 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:17:43.397866 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:17:43.397877 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:17:43.397887 | orchestrator | 2026-02-04 02:17:43.397898 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-04 02:17:43.397909 | orchestrator | Wednesday 04 February 2026 02:17:40 +0000 (0:00:01.474) 0:00:06.978 **** 2026-02-04 02:17:43.397931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 02:17:51.493542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 02:17:51.493689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 02:17:51.493752 | orchestrator | 2026-02-04 02:17:51.493778 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-04 02:17:51.493797 | orchestrator | Wednesday 04 February 2026 02:17:43 +0000 (0:00:03.208) 0:00:10.187 **** 2026-02-04 02:17:51.493809 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:17:51.493822 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:17:51.493833 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:17:51.493844 | orchestrator | 2026-02-04 02:17:51.493855 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-04 02:17:51.493887 | orchestrator | Wednesday 04 February 2026 02:17:44 +0000 (0:00:01.108) 0:00:11.295 **** 2026-02-04 02:17:51.493899 | 
orchestrator | changed: [testbed-node-0] 2026-02-04 02:17:51.493910 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:17:51.493921 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:17:51.493932 | orchestrator | 2026-02-04 02:17:51.493943 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-04 02:17:51.493954 | orchestrator | Wednesday 04 February 2026 02:17:48 +0000 (0:00:04.018) 0:00:15.314 **** 2026-02-04 02:17:51.493966 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:17:51.493977 | orchestrator | 2026-02-04 02:17:51.493988 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-04 02:17:51.493999 | orchestrator | Wednesday 04 February 2026 02:17:49 +0000 (0:00:00.553) 0:00:15.867 **** 2026-02-04 02:17:51.494125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:17:51.494166 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:17:51.494190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:17:56.612585 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:17:56.612741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:17:56.612792 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:17:56.612806 | orchestrator | 2026-02-04 02:17:56.612818 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-04 02:17:56.612830 | orchestrator | Wednesday 04 February 2026 02:17:51 +0000 (0:00:02.416) 0:00:18.283 **** 2026-02-04 02:17:56.612844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:17:56.612856 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:17:56.612895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:17:56.612918 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:17:56.612930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:17:56.612942 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:17:56.612954 | orchestrator | 2026-02-04 02:17:56.612965 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-04 02:17:56.612976 | orchestrator | Wednesday 04 February 2026 02:17:54 +0000 (0:00:02.764) 0:00:21.047 **** 2026-02-04 02:17:56.613002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:17:59.711488 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:17:59.711622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:17:59.711655 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:17:59.711697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 02:17:59.711748 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:17:59.711768 | orchestrator | 2026-02-04 02:17:59.711788 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-04 02:17:59.711806 | orchestrator | Wednesday 04 February 2026 02:17:56 +0000 (0:00:02.358) 0:00:23.406 **** 2026-02-04 02:17:59.711853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 02:17:59.711874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 02:17:59.711916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-04 02:20:20.164658 | orchestrator |
2026-02-04 02:20:20.164780 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-02-04 02:20:20.164800 | orchestrator | Wednesday 04 February 2026 02:17:59 +0000 (0:00:03.095) 0:00:26.502 ****
2026-02-04 02:20:20.164811 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:20:20.164825 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:20:20.164836 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:20:20.164848 | orchestrator |
2026-02-04 02:20:20.164858 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-02-04 02:20:20.164870 | orchestrator | Wednesday 04 February 2026 02:18:00 +0000 (0:00:00.809) 0:00:27.311 ****
2026-02-04 02:20:20.164881 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:20:20.164894 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:20:20.164906 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:20:20.164917 | orchestrator |
2026-02-04 02:20:20.164929 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-02-04 02:20:20.164940 | orchestrator | Wednesday 04 February 2026 02:18:01 +0000 (0:00:00.598) 0:00:27.910 ****
2026-02-04 02:20:20.164952 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:20:20.164962 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:20:20.164974 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:20:20.164986 | orchestrator |
2026-02-04 02:20:20.164997 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-04 02:20:20.165008 | orchestrator | Wednesday 04 February 2026 02:18:01 +0000 (0:00:00.331) 0:00:28.241 ****
2026-02-04 02:20:20.165020 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-02-04 02:20:20.165034 | orchestrator | ...ignoring
2026-02-04 02:20:20.165046 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-02-04 02:20:20.165057 | orchestrator | ...ignoring
2026-02-04 02:20:20.165068 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-02-04 02:20:20.165079 | orchestrator | ...ignoring
2026-02-04 02:20:20.165155 | orchestrator |
2026-02-04 02:20:20.165170 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-04 02:20:20.165182 | orchestrator | Wednesday 04 February 2026 02:18:12 +0000 (0:00:10.869) 0:00:39.111 ****
2026-02-04 02:20:20.165193 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:20:20.165204 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:20:20.165216 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:20:20.165226 | orchestrator |
2026-02-04 02:20:20.165238 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-04 02:20:20.165251 | orchestrator | Wednesday 04 February 2026 02:18:12 +0000 (0:00:00.433) 0:00:39.544 ****
2026-02-04 02:20:20.165263 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:20:20.165276 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:20:20.165287 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:20:20.165299 | orchestrator |
2026-02-04 02:20:20.165311 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-04 02:20:20.165324 | orchestrator | Wednesday 04 February 2026 02:18:13 +0000 (0:00:00.699) 0:00:40.244 ****
2026-02-04 02:20:20.165335 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:20:20.165347 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:20:20.165358 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:20:20.165369 | orchestrator |
2026-02-04 02:20:20.165398 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-04 02:20:20.165412 | orchestrator | Wednesday 04 February 2026 02:18:13 +0000 (0:00:00.471) 0:00:40.716 ****
2026-02-04 02:20:20.165424 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:20:20.165436 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:20:20.165447 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:20:20.165459 | orchestrator |
2026-02-04 02:20:20.165471 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-04 02:20:20.165484 | orchestrator | Wednesday 04 February 2026 02:18:14 +0000 (0:00:00.505) 0:00:41.221 ****
2026-02-04 02:20:20.165496 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:20:20.165507 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:20:20.165519 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:20:20.165530 | orchestrator |
2026-02-04 02:20:20.165542 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-04 02:20:20.165555 | orchestrator | Wednesday 04 February 2026 02:18:14 +0000 (0:00:00.439) 0:00:41.661 ****
2026-02-04 02:20:20.165566 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:20:20.165578 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:20:20.165589 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:20:20.165601 | orchestrator |
2026-02-04 02:20:20.165613 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-04 02:20:20.165625 | orchestrator | Wednesday 04 February 2026 02:18:15 +0000 (0:00:00.690) 0:00:42.351 ****
2026-02-04 02:20:20.165636 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:20:20.165647 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:20:20.165659 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-02-04 02:20:20.165671 | orchestrator |
2026-02-04 02:20:20.165682 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-02-04 02:20:20.165693 | orchestrator | Wednesday 04 February 2026 02:18:15 +0000 (0:00:00.403) 0:00:42.755 ****
2026-02-04 02:20:20.165706 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:20:20.165717 | orchestrator |
2026-02-04 02:20:20.165728 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-02-04 02:20:20.165740 | orchestrator | Wednesday 04 February 2026 02:18:26 +0000 (0:00:10.197) 0:00:52.952 ****
2026-02-04 02:20:20.165752 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:20:20.165764 | orchestrator |
2026-02-04 02:20:20.165776 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-04 02:20:20.165788 | orchestrator | Wednesday 04 February 2026 02:18:26 +0000 (0:00:00.158) 0:00:53.111 ****
2026-02-04 02:20:20.165800 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:20:20.165851 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:20:20.165866 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:20:20.165876 | orchestrator |
2026-02-04 02:20:20.165888 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-02-04 02:20:20.165899 | orchestrator | Wednesday 04 February 2026 02:18:27 +0000 (0:00:01.113) 0:00:54.225 ****
2026-02-04 02:20:20.165911 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:20:20.165923 | orchestrator |
2026-02-04 02:20:20.165934 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-02-04 02:20:20.165943 | orchestrator | Wednesday 04 February 2026 02:18:35 +0000 (0:00:08.327) 0:01:02.553 ****
2026-02-04 02:20:20.165953 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:20:20.165962 | orchestrator |
2026-02-04 02:20:20.165972 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-02-04 02:20:20.165983 | orchestrator | Wednesday 04 February 2026 02:18:37 +0000 (0:00:01.554) 0:01:04.107 ****
2026-02-04 02:20:20.165994 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:20:20.166006 |
orchestrator | 2026-02-04 02:20:20.166085 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-02-04 02:20:20.166128 | orchestrator | Wednesday 04 February 2026 02:18:40 +0000 (0:00:02.709) 0:01:06.817 **** 2026-02-04 02:20:20.166140 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:20:20.166151 | orchestrator | 2026-02-04 02:20:20.166162 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-04 02:20:20.166173 | orchestrator | Wednesday 04 February 2026 02:18:40 +0000 (0:00:00.131) 0:01:06.948 **** 2026-02-04 02:20:20.166184 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:20:20.166195 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:20:20.166206 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:20:20.166217 | orchestrator | 2026-02-04 02:20:20.166227 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-04 02:20:20.166238 | orchestrator | Wednesday 04 February 2026 02:18:40 +0000 (0:00:00.319) 0:01:07.268 **** 2026-02-04 02:20:20.166249 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:20:20.166260 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-04 02:20:20.166271 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:20:20.166282 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:20:20.166292 | orchestrator | 2026-02-04 02:20:20.166303 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-04 02:20:20.166315 | orchestrator | skipping: no hosts matched 2026-02-04 02:20:20.166325 | orchestrator | 2026-02-04 02:20:20.166335 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-04 02:20:20.166346 | orchestrator | 2026-02-04 02:20:20.166356 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-02-04 02:20:20.166367 | orchestrator | Wednesday 04 February 2026 02:18:41 +0000 (0:00:00.567) 0:01:07.836 **** 2026-02-04 02:20:20.166378 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:20:20.166389 | orchestrator | 2026-02-04 02:20:20.166400 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-04 02:20:20.166410 | orchestrator | Wednesday 04 February 2026 02:19:05 +0000 (0:00:24.306) 0:01:32.143 **** 2026-02-04 02:20:20.166421 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:20:20.166432 | orchestrator | 2026-02-04 02:20:20.166443 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-04 02:20:20.166455 | orchestrator | Wednesday 04 February 2026 02:19:16 +0000 (0:00:11.556) 0:01:43.699 **** 2026-02-04 02:20:20.166467 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:20:20.166478 | orchestrator | 2026-02-04 02:20:20.166493 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-04 02:20:20.166504 | orchestrator | 2026-02-04 02:20:20.166555 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-04 02:20:20.166568 | orchestrator | Wednesday 04 February 2026 02:19:19 +0000 (0:00:02.629) 0:01:46.329 **** 2026-02-04 02:20:20.166590 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:20:20.166600 | orchestrator | 2026-02-04 02:20:20.166611 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-04 02:20:20.166622 | orchestrator | Wednesday 04 February 2026 02:19:38 +0000 (0:00:19.036) 0:02:05.365 **** 2026-02-04 02:20:20.166633 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:20:20.166644 | orchestrator | 2026-02-04 02:20:20.166655 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-04 02:20:20.166667 
| orchestrator | Wednesday 04 February 2026 02:19:55 +0000 (0:00:16.590) 0:02:21.956 **** 2026-02-04 02:20:20.166678 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:20:20.166688 | orchestrator | 2026-02-04 02:20:20.166699 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-04 02:20:20.166709 | orchestrator | 2026-02-04 02:20:20.166721 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-04 02:20:20.166733 | orchestrator | Wednesday 04 February 2026 02:19:57 +0000 (0:00:02.727) 0:02:24.683 **** 2026-02-04 02:20:20.166745 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:20:20.166756 | orchestrator | 2026-02-04 02:20:20.166766 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-04 02:20:20.166777 | orchestrator | Wednesday 04 February 2026 02:20:11 +0000 (0:00:13.254) 0:02:37.938 **** 2026-02-04 02:20:20.166789 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:20:20.166799 | orchestrator | 2026-02-04 02:20:20.166810 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-04 02:20:20.166821 | orchestrator | Wednesday 04 February 2026 02:20:16 +0000 (0:00:05.572) 0:02:43.510 **** 2026-02-04 02:20:20.166832 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:20:20.166844 | orchestrator | 2026-02-04 02:20:20.166851 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-04 02:20:20.166858 | orchestrator | 2026-02-04 02:20:20.166864 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-04 02:20:20.166871 | orchestrator | Wednesday 04 February 2026 02:20:19 +0000 (0:00:02.693) 0:02:46.204 **** 2026-02-04 02:20:20.166878 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:20:20.166885 | orchestrator | 
2026-02-04 02:20:20.166892 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-04 02:20:20.166911 | orchestrator | Wednesday 04 February 2026 02:20:20 +0000 (0:00:00.752) 0:02:46.956 **** 2026-02-04 02:20:33.265733 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:20:33.265847 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:20:33.265863 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:20:33.265874 | orchestrator | 2026-02-04 02:20:33.265885 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-04 02:20:33.265895 | orchestrator | Wednesday 04 February 2026 02:20:22 +0000 (0:00:02.373) 0:02:49.330 **** 2026-02-04 02:20:33.265906 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:20:33.265915 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:20:33.265924 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:20:33.265932 | orchestrator | 2026-02-04 02:20:33.265941 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-04 02:20:33.265950 | orchestrator | Wednesday 04 February 2026 02:20:24 +0000 (0:00:02.125) 0:02:51.455 **** 2026-02-04 02:20:33.265959 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:20:33.265967 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:20:33.265977 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:20:33.265987 | orchestrator | 2026-02-04 02:20:33.265996 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-04 02:20:33.266004 | orchestrator | Wednesday 04 February 2026 02:20:27 +0000 (0:00:02.451) 0:02:53.907 **** 2026-02-04 02:20:33.266058 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:20:33.266070 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:20:33.266079 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:20:33.266088 | orchestrator | 
2026-02-04 02:20:33.266160 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-04 02:20:33.266171 | orchestrator | Wednesday 04 February 2026 02:20:29 +0000 (0:00:02.212) 0:02:56.119 **** 2026-02-04 02:20:33.266180 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:20:33.266190 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:20:33.266201 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:20:33.266211 | orchestrator | 2026-02-04 02:20:33.266221 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-04 02:20:33.266230 | orchestrator | Wednesday 04 February 2026 02:20:32 +0000 (0:00:03.124) 0:02:59.244 **** 2026-02-04 02:20:33.266239 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:20:33.266248 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:20:33.266257 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:20:33.266267 | orchestrator | 2026-02-04 02:20:33.266275 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:20:33.266286 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-02-04 02:20:33.266297 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-04 02:20:33.266306 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-02-04 02:20:33.266315 | orchestrator | 2026-02-04 02:20:33.266324 | orchestrator | 2026-02-04 02:20:33.266333 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:20:33.266342 | orchestrator | Wednesday 04 February 2026 02:20:32 +0000 (0:00:00.230) 0:02:59.475 **** 2026-02-04 02:20:33.266351 | orchestrator | =============================================================================== 2026-02-04 02:20:33.266373 | 
orchestrator | mariadb : Restart MariaDB container ------------------------------------ 43.34s 2026-02-04 02:20:33.266382 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 28.15s 2026-02-04 02:20:33.266391 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.25s 2026-02-04 02:20:33.266401 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.87s 2026-02-04 02:20:33.266409 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.20s 2026-02-04 02:20:33.266419 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.33s 2026-02-04 02:20:33.266431 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.57s 2026-02-04 02:20:33.266440 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.36s 2026-02-04 02:20:33.266450 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.02s 2026-02-04 02:20:33.266459 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.21s 2026-02-04 02:20:33.266469 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.12s 2026-02-04 02:20:33.266478 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.10s 2026-02-04 02:20:33.266487 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.77s 2026-02-04 02:20:33.266496 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.76s 2026-02-04 02:20:33.266507 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.71s 2026-02-04 02:20:33.266516 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.69s 2026-02-04 02:20:33.266525 | 
orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.45s 2026-02-04 02:20:33.266534 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.42s 2026-02-04 02:20:33.266542 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.37s 2026-02-04 02:20:33.266552 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.36s 2026-02-04 02:20:35.969863 | orchestrator | 2026-02-04 02:20:35 | INFO  | Task 294f69ad-10fc-470f-8550-976d25b6b8c7 (rabbitmq) was prepared for execution. 2026-02-04 02:20:35.969932 | orchestrator | 2026-02-04 02:20:35 | INFO  | It takes a moment until task 294f69ad-10fc-470f-8550-976d25b6b8c7 (rabbitmq) has been started and output is visible here. 2026-02-04 02:20:50.113759 | orchestrator | 2026-02-04 02:20:50.113917 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 02:20:50.113926 | orchestrator | 2026-02-04 02:20:50.113931 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 02:20:50.113936 | orchestrator | Wednesday 04 February 2026 02:20:40 +0000 (0:00:00.193) 0:00:00.193 **** 2026-02-04 02:20:50.113941 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:20:50.113947 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:20:50.113951 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:20:50.113955 | orchestrator | 2026-02-04 02:20:50.113959 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 02:20:50.113964 | orchestrator | Wednesday 04 February 2026 02:20:40 +0000 (0:00:00.310) 0:00:00.504 **** 2026-02-04 02:20:50.113968 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-04 02:20:50.113973 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-04 02:20:50.113977 | orchestrator | ok: 
[testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-04 02:20:50.113981 | orchestrator | 2026-02-04 02:20:50.113985 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-04 02:20:50.113990 | orchestrator | 2026-02-04 02:20:50.113994 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-04 02:20:50.114004 | orchestrator | Wednesday 04 February 2026 02:20:41 +0000 (0:00:00.601) 0:00:01.106 **** 2026-02-04 02:20:50.114008 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:20:50.114058 | orchestrator | 2026-02-04 02:20:50.114065 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-04 02:20:50.114073 | orchestrator | Wednesday 04 February 2026 02:20:42 +0000 (0:00:00.587) 0:00:01.694 **** 2026-02-04 02:20:50.114079 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:20:50.114086 | orchestrator | 2026-02-04 02:20:50.114092 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-04 02:20:50.114100 | orchestrator | Wednesday 04 February 2026 02:20:43 +0000 (0:00:01.003) 0:00:02.698 **** 2026-02-04 02:20:50.114163 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:20:50.114172 | orchestrator | 2026-02-04 02:20:50.114176 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-04 02:20:50.114181 | orchestrator | Wednesday 04 February 2026 02:20:43 +0000 (0:00:00.401) 0:00:03.099 **** 2026-02-04 02:20:50.114185 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:20:50.114189 | orchestrator | 2026-02-04 02:20:50.114194 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-04 02:20:50.114198 | orchestrator | Wednesday 04 February 2026 02:20:43 +0000 (0:00:00.396) 0:00:03.496 **** 
2026-02-04 02:20:50.114202 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:20:50.114206 | orchestrator | 2026-02-04 02:20:50.114210 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-04 02:20:50.114214 | orchestrator | Wednesday 04 February 2026 02:20:44 +0000 (0:00:00.415) 0:00:03.912 **** 2026-02-04 02:20:50.114218 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:20:50.114222 | orchestrator | 2026-02-04 02:20:50.114227 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-04 02:20:50.114231 | orchestrator | Wednesday 04 February 2026 02:20:45 +0000 (0:00:00.650) 0:00:04.562 **** 2026-02-04 02:20:50.114248 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:20:50.114267 | orchestrator | 2026-02-04 02:20:50.114271 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-04 02:20:50.114275 | orchestrator | Wednesday 04 February 2026 02:20:45 +0000 (0:00:00.957) 0:00:05.519 **** 2026-02-04 02:20:50.114279 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:20:50.114283 | orchestrator | 2026-02-04 02:20:50.114287 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-04 02:20:50.114291 | orchestrator | Wednesday 04 February 2026 02:20:46 +0000 (0:00:00.883) 0:00:06.403 **** 2026-02-04 02:20:50.114296 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:20:50.114300 | orchestrator | 2026-02-04 02:20:50.114304 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-04 02:20:50.114309 | orchestrator | Wednesday 04 February 2026 02:20:47 +0000 (0:00:00.421) 0:00:06.824 **** 2026-02-04 02:20:50.114313 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:20:50.114318 | orchestrator | 2026-02-04 
02:20:50.114322 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-04 02:20:50.114327 | orchestrator | Wednesday 04 February 2026 02:20:47 +0000 (0:00:00.402) 0:00:07.226 **** 2026-02-04 02:20:50.114350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 02:20:50.114357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 02:20:50.114362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 02:20:50.114372 | orchestrator | 2026-02-04 02:20:50.114380 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-04 02:20:50.114384 | orchestrator | Wednesday 04 February 2026 02:20:48 +0000 (0:00:00.823) 0:00:08.050 **** 2026-02-04 02:20:50.114389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 02:20:50.114399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 02:21:08.242551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 02:21:08.242699 | orchestrator | 2026-02-04 02:21:08.242730 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-04 02:21:08.242753 | orchestrator | Wednesday 04 February 2026 02:20:50 +0000 (0:00:01.583) 0:00:09.633 **** 2026-02-04 02:21:08.242808 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-04 02:21:08.242830 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-04 02:21:08.242850 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-04 02:21:08.242868 | orchestrator | 2026-02-04 02:21:08.242885 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-02-04 02:21:08.242903 | orchestrator | Wednesday 04 February 2026 02:20:51 +0000 (0:00:01.471) 0:00:11.104 **** 2026-02-04 02:21:08.242931 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-04 02:21:08.242943 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-04 02:21:08.242954 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-04 02:21:08.242965 | orchestrator | 2026-02-04 02:21:08.242976 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-04 02:21:08.242987 | orchestrator | Wednesday 04 February 2026 02:20:53 +0000 (0:00:01.779) 0:00:12.884 **** 2026-02-04 02:21:08.242998 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-04 02:21:08.243009 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-04 02:21:08.243023 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-04 02:21:08.243036 | orchestrator | 2026-02-04 02:21:08.243050 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-04 02:21:08.243063 | orchestrator | Wednesday 04 February 2026 02:20:54 +0000 (0:00:01.352) 0:00:14.236 **** 2026-02-04 02:21:08.243077 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-04 02:21:08.243090 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-04 02:21:08.243103 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-04 02:21:08.243116 | orchestrator | 2026-02-04 02:21:08.243178 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-02-04 02:21:08.243198 | orchestrator | Wednesday 04 February 2026 02:20:56 +0000 (0:00:01.646) 0:00:15.882 **** 2026-02-04 02:21:08.243218 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-04 02:21:08.243238 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-04 02:21:08.243259 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-04 02:21:08.243273 | orchestrator | 2026-02-04 02:21:08.243286 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-04 02:21:08.243300 | orchestrator | Wednesday 04 February 2026 02:20:57 +0000 (0:00:01.389) 0:00:17.272 **** 2026-02-04 02:21:08.243313 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-04 02:21:08.243326 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-04 02:21:08.243339 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-04 02:21:08.243352 | orchestrator | 2026-02-04 02:21:08.243365 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-04 02:21:08.243378 | orchestrator | Wednesday 04 February 2026 02:20:59 +0000 (0:00:01.353) 0:00:18.625 **** 2026-02-04 02:21:08.243389 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:21:08.243402 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:21:08.243435 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:21:08.243458 | orchestrator | 2026-02-04 02:21:08.243469 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-04 02:21:08.243480 | orchestrator | 
Wednesday 04 February 2026 02:20:59 +0000 (0:00:00.424) 0:00:19.050 **** 2026-02-04 02:21:08.243494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 02:21:08.243515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 02:21:08.243529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 02:21:08.243541 | orchestrator | 2026-02-04 02:21:08.243552 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-04 02:21:08.243563 | orchestrator | Wednesday 04 February 2026 02:21:00 +0000 (0:00:01.195) 0:00:20.246 **** 2026-02-04 02:21:08.243574 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:21:08.243585 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:21:08.243596 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:21:08.243607 | orchestrator | 2026-02-04 02:21:08.243618 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-04 02:21:08.243635 | orchestrator | Wednesday 04 February 2026 02:21:01 +0000 (0:00:00.786) 0:00:21.032 **** 2026-02-04 02:21:08.243646 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:21:08.243657 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:21:08.243668 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:21:08.243679 | orchestrator | 2026-02-04 02:21:08.243690 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-04 02:21:08.243708 | orchestrator | Wednesday 04 February 2026 02:21:08 +0000 (0:00:06.726) 0:00:27.759 **** 2026-02-04 02:22:43.038307 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:22:43.038426 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:22:43.038444 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:22:43.038456 | orchestrator | 2026-02-04 02:22:43.038470 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-04 02:22:43.038483 | orchestrator | 2026-02-04 02:22:43.038494 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-04 02:22:43.038506 | orchestrator | Wednesday 04 February 2026 02:21:08 +0000 (0:00:00.549) 0:00:28.308 **** 2026-02-04 02:22:43.038517 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:22:43.038529 | orchestrator | 2026-02-04 02:22:43.038541 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-04 02:22:43.038552 | orchestrator | Wednesday 04 February 2026 02:21:09 +0000 (0:00:00.606) 0:00:28.915 **** 2026-02-04 02:22:43.038563 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:22:43.038574 | orchestrator | 2026-02-04 02:22:43.038585 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-04 02:22:43.038596 | orchestrator | Wednesday 
04 February 2026 02:21:09 +0000 (0:00:00.283) 0:00:29.198 **** 2026-02-04 02:22:43.038607 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:22:43.038618 | orchestrator | 2026-02-04 02:22:43.038629 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-04 02:22:43.038640 | orchestrator | Wednesday 04 February 2026 02:21:11 +0000 (0:00:01.674) 0:00:30.873 **** 2026-02-04 02:22:43.038651 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:22:43.038663 | orchestrator | 2026-02-04 02:22:43.038674 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-04 02:22:43.038685 | orchestrator | 2026-02-04 02:22:43.038696 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-04 02:22:43.038707 | orchestrator | Wednesday 04 February 2026 02:22:05 +0000 (0:00:54.460) 0:01:25.333 **** 2026-02-04 02:22:43.038718 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:22:43.038729 | orchestrator | 2026-02-04 02:22:43.038740 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-04 02:22:43.038751 | orchestrator | Wednesday 04 February 2026 02:22:06 +0000 (0:00:00.602) 0:01:25.936 **** 2026-02-04 02:22:43.038762 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:22:43.038773 | orchestrator | 2026-02-04 02:22:43.038786 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-04 02:22:43.038799 | orchestrator | Wednesday 04 February 2026 02:22:06 +0000 (0:00:00.240) 0:01:26.176 **** 2026-02-04 02:22:43.038813 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:22:43.038826 | orchestrator | 2026-02-04 02:22:43.038840 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-04 02:22:43.038869 | orchestrator | Wednesday 04 February 2026 02:22:08 +0000 (0:00:01.628) 
0:01:27.805 **** 2026-02-04 02:22:43.038884 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:22:43.038896 | orchestrator | 2026-02-04 02:22:43.038910 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-04 02:22:43.038924 | orchestrator | 2026-02-04 02:22:43.038937 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-04 02:22:43.038950 | orchestrator | Wednesday 04 February 2026 02:22:21 +0000 (0:00:13.716) 0:01:41.521 **** 2026-02-04 02:22:43.038963 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:22:43.038977 | orchestrator | 2026-02-04 02:22:43.039016 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-04 02:22:43.039028 | orchestrator | Wednesday 04 February 2026 02:22:22 +0000 (0:00:00.753) 0:01:42.274 **** 2026-02-04 02:22:43.039039 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:22:43.039050 | orchestrator | 2026-02-04 02:22:43.039061 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-04 02:22:43.039072 | orchestrator | Wednesday 04 February 2026 02:22:22 +0000 (0:00:00.240) 0:01:42.515 **** 2026-02-04 02:22:43.039083 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:22:43.039094 | orchestrator | 2026-02-04 02:22:43.039105 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-04 02:22:43.039116 | orchestrator | Wednesday 04 February 2026 02:22:24 +0000 (0:00:01.669) 0:01:44.185 **** 2026-02-04 02:22:43.039127 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:22:43.039138 | orchestrator | 2026-02-04 02:22:43.039149 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-04 02:22:43.039160 | orchestrator | 2026-02-04 02:22:43.039171 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-02-04 02:22:43.039207 | orchestrator | Wednesday 04 February 2026 02:22:39 +0000 (0:00:14.957) 0:01:59.142 **** 2026-02-04 02:22:43.039219 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:22:43.039230 | orchestrator | 2026-02-04 02:22:43.039241 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-04 02:22:43.039252 | orchestrator | Wednesday 04 February 2026 02:22:40 +0000 (0:00:00.598) 0:01:59.741 **** 2026-02-04 02:22:43.039263 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-04 02:22:43.039274 | orchestrator | enable_outward_rabbitmq_True 2026-02-04 02:22:43.039285 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-04 02:22:43.039296 | orchestrator | outward_rabbitmq_restart 2026-02-04 02:22:43.039307 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:22:43.039318 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:22:43.039329 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:22:43.039340 | orchestrator | 2026-02-04 02:22:43.039351 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-04 02:22:43.039362 | orchestrator | skipping: no hosts matched 2026-02-04 02:22:43.039373 | orchestrator | 2026-02-04 02:22:43.039384 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-04 02:22:43.039395 | orchestrator | skipping: no hosts matched 2026-02-04 02:22:43.039406 | orchestrator | 2026-02-04 02:22:43.039417 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-04 02:22:43.039428 | orchestrator | skipping: no hosts matched 2026-02-04 02:22:43.039439 | orchestrator | 2026-02-04 02:22:43.039450 | orchestrator | PLAY RECAP ********************************************************************* 
2026-02-04 02:22:43.039480 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-04 02:22:43.039494 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 02:22:43.039505 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 02:22:43.039516 | orchestrator |
2026-02-04 02:22:43.039527 | orchestrator |
2026-02-04 02:22:43.039539 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 02:22:43.039550 | orchestrator | Wednesday 04 February 2026 02:22:42 +0000 (0:00:02.423) 0:02:02.165 ****
2026-02-04 02:22:43.039560 | orchestrator | ===============================================================================
2026-02-04 02:22:43.039571 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 83.13s
2026-02-04 02:22:43.039582 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.73s
2026-02-04 02:22:43.039603 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 4.97s
2026-02-04 02:22:43.039614 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.42s
2026-02-04 02:22:43.039625 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.96s
2026-02-04 02:22:43.039636 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.78s
2026-02-04 02:22:43.039646 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.65s
2026-02-04 02:22:43.039657 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.58s
2026-02-04 02:22:43.039668 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.47s
2026-02-04 02:22:43.039679 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.39s
2026-02-04 02:22:43.039689 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.35s
2026-02-04 02:22:43.039700 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.35s
2026-02-04 02:22:43.039711 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.20s
2026-02-04 02:22:43.039722 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.00s
2026-02-04 02:22:43.039739 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.96s
2026-02-04 02:22:43.039750 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.88s
2026-02-04 02:22:43.039761 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.82s
2026-02-04 02:22:43.039772 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.79s
2026-02-04 02:22:43.039783 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.76s
2026-02-04 02:22:43.039794 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 0.65s
2026-02-04 02:22:45.661453 | orchestrator | 2026-02-04 02:22:45 | INFO  | Task f9d1687c-5653-4291-bb63-057036ee28a2 (openvswitch) was prepared for execution.
2026-02-04 02:22:45.661537 | orchestrator | 2026-02-04 02:22:45 | INFO  | It takes a moment until task f9d1687c-5653-4291-bb63-057036ee28a2 (openvswitch) has been started and output is visible here.
2026-02-04 02:22:58.938436 | orchestrator | 2026-02-04 02:22:58.938579 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 02:22:58.938605 | orchestrator | 2026-02-04 02:22:58.938623 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 02:22:58.938642 | orchestrator | Wednesday 04 February 2026 02:22:50 +0000 (0:00:00.271) 0:00:00.271 **** 2026-02-04 02:22:58.938660 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:22:58.938679 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:22:58.938697 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:22:58.938715 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:22:58.938733 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:22:58.938751 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:22:58.938768 | orchestrator | 2026-02-04 02:22:58.938785 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 02:22:58.938803 | orchestrator | Wednesday 04 February 2026 02:22:50 +0000 (0:00:00.749) 0:00:01.020 **** 2026-02-04 02:22:58.938822 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 02:22:58.938843 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 02:22:58.938863 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 02:22:58.938883 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 02:22:58.938903 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 02:22:58.938922 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-04 02:22:58.938942 | orchestrator | 2026-02-04 02:22:58.938997 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-02-04 02:22:58.939018 | orchestrator | 2026-02-04 02:22:58.939040 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-04 02:22:58.939058 | orchestrator | Wednesday 04 February 2026 02:22:51 +0000 (0:00:00.660) 0:00:01.680 **** 2026-02-04 02:22:58.939079 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:22:58.939101 | orchestrator | 2026-02-04 02:22:58.939121 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-04 02:22:58.939141 | orchestrator | Wednesday 04 February 2026 02:22:52 +0000 (0:00:01.247) 0:00:02.928 **** 2026-02-04 02:22:58.939161 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-04 02:22:58.939182 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-04 02:22:58.939346 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-04 02:22:58.939369 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-04 02:22:58.939389 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-04 02:22:58.939409 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-04 02:22:58.939429 | orchestrator | 2026-02-04 02:22:58.939449 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-04 02:22:58.939469 | orchestrator | Wednesday 04 February 2026 02:22:53 +0000 (0:00:01.191) 0:00:04.120 **** 2026-02-04 02:22:58.939487 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-04 02:22:58.939505 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-04 02:22:58.939523 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-04 02:22:58.939540 | orchestrator | changed: 
[testbed-node-3] => (item=openvswitch) 2026-02-04 02:22:58.939557 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-04 02:22:58.939573 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-04 02:22:58.939589 | orchestrator | 2026-02-04 02:22:58.939606 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-04 02:22:58.939623 | orchestrator | Wednesday 04 February 2026 02:22:55 +0000 (0:00:01.485) 0:00:05.605 **** 2026-02-04 02:22:58.939641 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-04 02:22:58.939658 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:22:58.939677 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-04 02:22:58.939695 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:22:58.939712 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-04 02:22:58.939731 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:22:58.939748 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-04 02:22:58.939766 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:22:58.939783 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-04 02:22:58.939803 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:22:58.939822 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-04 02:22:58.939842 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:22:58.939861 | orchestrator | 2026-02-04 02:22:58.939881 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-04 02:22:58.939899 | orchestrator | Wednesday 04 February 2026 02:22:56 +0000 (0:00:01.251) 0:00:06.856 **** 2026-02-04 02:22:58.939918 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:22:58.939936 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:22:58.939954 | orchestrator | skipping: [testbed-node-2] 
2026-02-04 02:22:58.939972 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:22:58.940022 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:22:58.940042 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:22:58.940062 | orchestrator | 2026-02-04 02:22:58.940083 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-04 02:22:58.940126 | orchestrator | Wednesday 04 February 2026 02:22:57 +0000 (0:00:00.765) 0:00:07.622 **** 2026-02-04 02:22:58.940171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 02:22:58.940219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-02-04 02:22:58.940238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 02:22:58.940335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 02:22:58.940364 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 02:22:58.940387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 02:23:01.283458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 02:23:01.283541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 02:23:01.283558 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 02:23:01.283573 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 02:23:01.283605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 02:23:01.283660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 02:23:01.283676 | orchestrator | 2026-02-04 02:23:01.283692 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-04 02:23:01.283705 | orchestrator | Wednesday 04 February 2026 02:22:59 +0000 (0:00:01.551) 0:00:09.174 **** 2026-02-04 02:23:01.283718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 02:23:01.283731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 02:23:01.283743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 02:23:01.283756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 02:23:01.283806 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 02:23:01.283828 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 02:23:04.272891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 02:23:04.272995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 02:23:04.273007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 02:23:04.273032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 02:23:04.273058 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 02:23:04.273076 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 02:23:04.273082 | orchestrator |
2026-02-04 02:23:04.273088 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-02-04 02:23:04.273094 | orchestrator | Wednesday 04 February 2026 02:23:01 +0000 (0:00:02.322) 0:00:11.496 ****
2026-02-04 02:23:04.273098 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:23:04.273104 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:23:04.273108 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:23:04.273112 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:23:04.273116 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:23:04.273120 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:23:04.273125 | orchestrator |
2026-02-04 02:23:04.273129 | orchestrator | TASK [openvswitch : Check openvswitch containers] ******************************
2026-02-04 02:23:04.273133 | orchestrator | Wednesday 04 February 2026 02:23:02 +0000 (0:00:01.016) 0:00:12.513 ****
2026-02-04 02:23:04.273137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 02:23:04.273143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 02:23:04.273154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 02:23:04.273159 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 02:23:04.273169 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 02:23:29.558374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 02:23:29.558500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 02:23:29.558520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 02:23:29.558574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 02:23:29.558587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 02:23:29.558616 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 02:23:29.558627 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 02:23:29.558637 | orchestrator |
2026-02-04 02:23:29.558648 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-04 02:23:29.558659 | orchestrator | Wednesday 04 February 2026 02:23:04 +0000 (0:00:01.986) 0:00:14.499 ****
2026-02-04 02:23:29.558669 | orchestrator |
2026-02-04 02:23:29.558679 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-04 02:23:29.558689 | orchestrator | Wednesday 04 February 2026 02:23:04 +0000 (0:00:00.345) 0:00:14.844 ****
2026-02-04 02:23:29.558709 | orchestrator |
2026-02-04 02:23:29.558720 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-04 02:23:29.558731 | orchestrator | Wednesday 04 February 2026 02:23:04 +0000 (0:00:00.149) 0:00:14.993 ****
2026-02-04 02:23:29.558740 | orchestrator |
2026-02-04 02:23:29.558750 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-04 02:23:29.558760 | orchestrator | Wednesday 04 February 2026 02:23:04 +0000 (0:00:00.133) 0:00:15.127 ****
2026-02-04 02:23:29.558769 | orchestrator |
2026-02-04 02:23:29.558779 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-04 02:23:29.558788 | orchestrator | Wednesday 04 February 2026 02:23:05 +0000 (0:00:00.134) 0:00:15.261 ****
2026-02-04 02:23:29.558798 | orchestrator |
2026-02-04 02:23:29.558809 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-04 02:23:29.558819 | orchestrator | Wednesday 04 February 2026 02:23:05 +0000 (0:00:00.140) 0:00:15.402 ****
2026-02-04 02:23:29.558829 | orchestrator |
2026-02-04 02:23:29.558840 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-02-04 02:23:29.558851 | orchestrator | Wednesday 04 February 2026 02:23:05 +0000 (0:00:00.141) 0:00:15.543 ****
2026-02-04 02:23:29.558862 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:23:29.558875 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:23:29.558885 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:23:29.558896 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:23:29.558906 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:23:29.558917 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:23:29.558927 | orchestrator |
2026-02-04 02:23:29.558937 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-02-04 02:23:29.558949 | orchestrator | Wednesday 04 February 2026 02:23:14 +0000 (0:00:08.840) 0:00:24.384 ****
2026-02-04 02:23:29.558960 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:23:29.558978 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:23:29.558989 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:23:29.559000 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:23:29.559009 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:23:29.559019 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:23:29.559029 | orchestrator |
2026-02-04 02:23:29.559039 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-04 02:23:29.559049 | orchestrator | Wednesday 04 February 2026 02:23:15 +0000 (0:00:01.056) 0:00:25.440 ****
2026-02-04 02:23:29.559060 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:23:29.559070 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:23:29.559080 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:23:29.559090 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:23:29.559100 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:23:29.559110 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:23:29.559120 | orchestrator |
2026-02-04 02:23:29.559129 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-02-04 02:23:29.559139 | orchestrator | Wednesday 04 February 2026 02:23:23 +0000 (0:00:08.100) 0:00:33.541 ****
2026-02-04 02:23:29.559150 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-02-04 02:23:29.559161 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-02-04 02:23:29.559172 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-02-04 02:23:29.559183 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-02-04 02:23:29.559193 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-02-04 02:23:29.559204 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-02-04 02:23:29.559237 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-02-04 02:23:29.559267 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-02-04 02:23:42.752058 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-02-04 02:23:42.752175 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-02-04 02:23:42.752193 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-02-04 02:23:42.752205 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-02-04 02:23:42.752313 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-04 02:23:42.752331 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-04 02:23:42.752350 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-04 02:23:42.752370 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-04 02:23:42.752390 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-04 02:23:42.752409 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-04 02:23:42.752422 | orchestrator |
2026-02-04 02:23:42.752435 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-02-04 02:23:42.752448 | orchestrator | Wednesday 04 February 2026 02:23:29 +0000 (0:00:06.145) 0:00:39.686 ****
2026-02-04 02:23:42.752461 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-02-04 02:23:42.752473 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:23:42.752486 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-02-04 02:23:42.752497 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:23:42.752508 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-02-04 02:23:42.752524 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:23:42.752542 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-02-04 02:23:42.752561 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-02-04 02:23:42.752578 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-02-04 02:23:42.752597 | orchestrator |
2026-02-04 02:23:42.752622 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-02-04 02:23:42.752646 | orchestrator | Wednesday 04 February 2026 02:23:31 +0000 (0:00:02.405) 0:00:42.091 ****
2026-02-04 02:23:42.752665 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-02-04 02:23:42.752684 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:23:42.752704 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-02-04 02:23:42.752721 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:23:42.752733 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-02-04 02:23:42.752744 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:23:42.752755 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-02-04 02:23:42.752766 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-02-04 02:23:42.752795 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-02-04 02:23:42.752806 | orchestrator |
2026-02-04 02:23:42.752817 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-04 02:23:42.752828 | orchestrator | Wednesday 04 February 2026 02:23:34 +0000 (0:00:03.041) 0:00:45.133 ****
2026-02-04 02:23:42.752839 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:23:42.752850 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:23:42.752924 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:23:42.752945 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:23:42.752963 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:23:42.752981 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:23:42.753000 | orchestrator |
2026-02-04 02:23:42.753017 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 02:23:42.753038 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-04 02:23:42.753059 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-04 02:23:42.753077 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-04 02:23:42.753095 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-04 02:23:42.753114 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-04 02:23:42.753132 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-04 02:23:42.753152 | orchestrator |
2026-02-04 02:23:42.753171 | orchestrator |
2026-02-04 02:23:42.753190 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 02:23:42.753208 | orchestrator | Wednesday 04 February 2026 02:23:42 +0000 (0:00:07.298) 0:00:52.432 ****
2026-02-04 02:23:42.753282 | orchestrator | ===============================================================================
2026-02-04 02:23:42.753302 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.40s
2026-02-04 02:23:42.753321 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.84s
2026-02-04 02:23:42.753339 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.15s
2026-02-04 02:23:42.753358 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.04s
2026-02-04 02:23:42.753377 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.41s
2026-02-04 02:23:42.753396 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.32s
2026-02-04 02:23:42.753415 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.99s
2026-02-04 02:23:42.753433 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.55s
2026-02-04 02:23:42.753450 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.49s
2026-02-04 02:23:42.753469 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.25s
2026-02-04 02:23:42.753487 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.25s
2026-02-04 02:23:42.753506 | orchestrator | module-load : Load modules ---------------------------------------------- 1.19s
2026-02-04 02:23:42.753524 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.06s
2026-02-04 02:23:42.753542 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.04s
2026-02-04 02:23:42.753560 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.02s
2026-02-04 02:23:42.753579 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.77s
2026-02-04 02:23:42.753597 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.75s
2026-02-04 02:23:42.753615 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s
2026-02-04 02:23:45.339641 | orchestrator | 2026-02-04 02:23:45 | INFO  | Task 6a9beb10-81f2-4ce5-8610-3a17d608d17e (ovn) was prepared for execution.
2026-02-04 02:23:45.339717 | orchestrator | 2026-02-04 02:23:45 | INFO  | It takes a moment until task 6a9beb10-81f2-4ce5-8610-3a17d608d17e (ovn) has been started and output is visible here.
2026-02-04 02:23:56.492871 | orchestrator |
2026-02-04 02:23:56.492992 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 02:23:56.493010 | orchestrator |
2026-02-04 02:23:56.493023 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 02:23:56.493034 | orchestrator | Wednesday 04 February 2026 02:23:49 +0000 (0:00:00.178) 0:00:00.178 ****
2026-02-04 02:23:56.493046 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:23:56.493058 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:23:56.493069 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:23:56.493080 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:23:56.493091 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:23:56.493102 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:23:56.493113 | orchestrator |
2026-02-04 02:23:56.493124 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 02:23:56.493135 | orchestrator | Wednesday 04 February 2026 02:23:50 +0000 (0:00:00.741) 0:00:00.919 ****
2026-02-04 02:23:56.493164 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-04 02:23:56.493177 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-04 02:23:56.493188 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-04 02:23:56.493199 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-04 02:23:56.493210 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-02-04 02:23:56.493221 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-04 02:23:56.493262 | orchestrator |
2026-02-04 02:23:56.493274 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-02-04 02:23:56.493286 | orchestrator |
2026-02-04 02:23:56.493297 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-02-04 02:23:56.493308 | orchestrator | Wednesday 04 February 2026 02:23:51 +0000 (0:00:00.812) 0:00:01.732 ****
2026-02-04 02:23:56.493320 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:23:56.493332 | orchestrator |
2026-02-04 02:23:56.493343 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-02-04 02:23:56.493354 | orchestrator | Wednesday 04 February 2026 02:23:52 +0000 (0:00:01.203) 0:00:02.935 ****
2026-02-04 02:23:56.493368 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:23:56.493381 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:23:56.493393 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:23:56.493404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:23:56.493438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:23:56.493469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:23:56.493482 | orchestrator |
2026-02-04 02:23:56.493493 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-02-04 02:23:56.493504 | orchestrator | Wednesday 04 February 2026 02:23:53 +0000 (0:00:01.231) 0:00:04.167 ****
2026-02-04 02:23:56.493522 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:23:56.493534 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:23:56.493545 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 02:23:56.493556 | orchestrator | changed: [testbed-node-0] => (item={'key':
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:23:56.493568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:23:56.493579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:23:56.493598 | orchestrator | 2026-02-04 02:23:56.493609 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-04 02:23:56.493621 | orchestrator | Wednesday 04 February 2026 02:23:55 +0000 (0:00:01.524) 0:00:05.691 **** 2026-02-04 02:23:56.493632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:23:56.493644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:23:56.493663 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.921705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.921874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.921895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.921909 | orchestrator | 2026-02-04 02:24:19.921923 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-04 02:24:19.921935 | orchestrator | Wednesday 04 February 2026 02:23:56 +0000 (0:00:01.135) 0:00:06.827 **** 2026-02-04 02:24:19.921947 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.921959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.921995 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.922007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.922082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.922119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.922131 | orchestrator | 2026-02-04 02:24:19.922143 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-04 02:24:19.922154 | orchestrator | Wednesday 04 February 2026 02:23:57 +0000 (0:00:01.509) 0:00:08.336 **** 
2026-02-04 02:24:19.922174 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.922186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.922198 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.922209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.922229 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.922274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:24:19.922308 | orchestrator | 2026-02-04 02:24:19.922321 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-04 02:24:19.922332 | orchestrator | Wednesday 04 February 2026 02:23:59 +0000 (0:00:01.354) 0:00:09.691 **** 2026-02-04 02:24:19.922355 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:24:19.922368 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:24:19.922379 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:24:19.922390 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:24:19.922400 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:24:19.922411 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:24:19.922422 | orchestrator | 2026-02-04 02:24:19.922433 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-04 02:24:19.922444 | orchestrator | Wednesday 04 February 2026 02:24:01 +0000 (0:00:02.380) 0:00:12.072 **** 2026-02-04 02:24:19.922454 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 
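The "Configure OVN in OVSDB" task writes each of these name/value pairs into the `external_ids` column of the local `Open_vSwitch` table, which is how an OVN chassis is pointed at its southbound database and tunnel endpoint. As a rough sketch of the equivalent manual commands on one node (values taken from the testbed-node-0 records in this log; the exact task implementation may differ), this is a configuration fragment, not part of the job output:

```shell
# Inspect what the deploy wrote for this chassis:
ovs-vsctl get Open_vSwitch . external_ids:ovn-remote

# Hand-applied equivalents of the task's items:
ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=192.168.16.10
ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
ovs-vsctl set Open_vSwitch . \
  external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"
ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
```

Items logged with `'state': 'absent'` correspond to `ovs-vsctl remove Open_vSwitch . external_ids <key>`, which is why the gateway-only keys (`ovn-bridge-mappings`, `ovn-cms-options`) end up set on the control nodes and removed from the compute nodes.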
2026-02-04 02:24:19.922466 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-02-04 02:24:19.922477 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-02-04 02:24:19.922487 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-02-04 02:24:19.922498 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-02-04 02:24:19.922509 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-02-04 02:24:19.922528 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-04 02:24:58.163051 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-04 02:24:58.163135 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-04 02:24:58.163154 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-04 02:24:58.163159 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-04 02:24:58.163163 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-04 02:24:58.163167 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-04 02:24:58.163173 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-04 02:24:58.163193 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-04 02:24:58.163197 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-04 02:24:58.163201 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-04 02:24:58.163205 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-02-04 02:24:58.163209 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-04 02:24:58.163214 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-04 02:24:58.163218 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-04 02:24:58.163222 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-04 02:24:58.163226 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-04 02:24:58.163229 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-04 02:24:58.163233 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-04 02:24:58.163237 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-04 02:24:58.163241 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-04 02:24:58.163246 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-04 02:24:58.163251 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-04 02:24:58.163257 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-04 02:24:58.163331 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-04 02:24:58.163339 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-04 02:24:58.163346 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-04 02:24:58.163352 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-04 02:24:58.163358 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-04 02:24:58.163364 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-04 02:24:58.163370 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-04 02:24:58.163376 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-04 02:24:58.163382 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-04 02:24:58.163388 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-04 02:24:58.163395 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-04 02:24:58.163401 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-04 02:24:58.163405 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-02-04 02:24:58.163430 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-02-04 02:24:58.163434 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-02-04 02:24:58.163443 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-02-04 02:24:58.163447 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-02-04 02:24:58.163451 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-02-04 02:24:58.163455 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-04 02:24:58.163459 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-04 02:24:58.163462 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-04 02:24:58.163467 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-04 02:24:58.163473 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-04 02:24:58.163479 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-04 02:24:58.163485 | orchestrator |
2026-02-04 02:24:58.163492 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-04 02:24:58.163498 | orchestrator | Wednesday 04 February 2026 02:24:19 +0000 (0:00:17.637) 0:00:29.710 ****
2026-02-04 02:24:58.163505 | orchestrator |
2026-02-04 02:24:58.163511 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-04 02:24:58.163517 | orchestrator | Wednesday 04 February 2026 02:24:19 +0000 (0:00:00.215) 0:00:29.925 ****
2026-02-04 02:24:58.163523 | orchestrator |
2026-02-04 02:24:58.163530 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-04 02:24:58.163536 | orchestrator | Wednesday 04 February 2026 02:24:19 +0000 (0:00:00.063) 0:00:29.989 ****
2026-02-04 02:24:58.163541 | orchestrator |
2026-02-04 02:24:58.163548 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-04 02:24:58.163553 | orchestrator | Wednesday 04 February 2026 02:24:19 +0000 (0:00:00.064) 0:00:30.053 ****
2026-02-04 02:24:58.163557 | orchestrator |
2026-02-04 02:24:58.163561 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-04 02:24:58.163565 | orchestrator | Wednesday 04 February 2026 02:24:19 +0000 (0:00:00.064) 0:00:30.117 ****
2026-02-04 02:24:58.163568 | orchestrator |
2026-02-04 02:24:58.163572 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-04 02:24:58.163576 | orchestrator | Wednesday 04 February 2026 02:24:19 +0000 (0:00:00.070) 0:00:30.188 ****
2026-02-04 02:24:58.163579 | orchestrator |
2026-02-04 02:24:58.163583 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-02-04 02:24:58.163587 | orchestrator | Wednesday 04 February 2026 02:24:19 +0000 (0:00:00.066) 0:00:30.255 ****
2026-02-04 02:24:58.163591 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:24:58.163596 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:24:58.163599 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:24:58.163603 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:24:58.163607 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:24:58.163610 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:24:58.163614 | orchestrator |
2026-02-04 02:24:58.163618 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-02-04 02:24:58.163622 | orchestrator | Wednesday 04 February 2026 02:24:21 +0000 (0:00:01.542) 0:00:31.798 ****
2026-02-04 02:24:58.163630 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:24:58.163634 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:24:58.163637 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:24:58.163641 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:24:58.163645 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:24:58.163648 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:24:58.163652 | orchestrator |
2026-02-04 02:24:58.163656 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-02-04 02:24:58.163659 | orchestrator |
2026-02-04 02:24:58.163663 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-04 02:24:58.163667 | orchestrator | Wednesday 04 February 2026 02:24:55 +0000 (0:00:34.434) 0:01:06.233 ****
2026-02-04 02:24:58.163671 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:24:58.163674 | orchestrator |
2026-02-04 02:24:58.163678 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-04 02:24:58.163682 | orchestrator | Wednesday 04 February 2026 02:24:56 +0000 (0:00:00.703) 0:01:06.936 ****
2026-02-04 02:24:58.163686 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:24:58.163689 | orchestrator |
2026-02-04 02:24:58.163693 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-02-04 02:24:58.163697 | orchestrator | Wednesday 04 February 2026 02:24:57 +0000 (0:00:00.519) 0:01:07.455 ****
2026-02-04 02:24:58.163700 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:24:58.163704 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:24:58.163708 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:24:58.163711 | orchestrator |
2026-02-04 02:24:58.163715 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-02-04 02:24:58.163722 | orchestrator | Wednesday 04 February 2026 02:24:58 +0000 (0:00:01.022) 0:01:08.477 ****
2026-02-04 02:25:09.087735 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:25:09.087849 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:25:09.087866 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:25:09.087879 | orchestrator |
2026-02-04 02:25:09.087893 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-02-04 02:25:09.087924 | orchestrator | Wednesday 04 February 2026 02:24:58 +0000 (0:00:00.357) 0:01:08.835 ****
2026-02-04 02:25:09.087936 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:25:09.087948 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:25:09.087959 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:25:09.087970 | orchestrator |
2026-02-04 02:25:09.087982 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-02-04 02:25:09.087994 | orchestrator | Wednesday 04 February 2026 02:24:58 +0000 (0:00:00.346) 0:01:09.182 ****
2026-02-04 02:25:09.088005 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:25:09.088016 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:25:09.088027 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:25:09.088039 | orchestrator |
2026-02-04 02:25:09.088050 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-04 02:25:09.088061 | orchestrator | Wednesday 04 February 2026 02:24:59 +0000 (0:00:00.309) 0:01:09.491 ****
2026-02-04 02:25:09.088072 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:25:09.088084 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:25:09.088095 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:25:09.088106 | orchestrator |
2026-02-04 02:25:09.088118 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-04 02:25:09.088129 | orchestrator | Wednesday 04 February 2026 02:24:59 +0000 (0:00:00.519) 0:01:10.011 ****
2026-02-04 02:25:09.088140 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:25:09.088153 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:25:09.088164 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:25:09.088176 | orchestrator |
2026-02-04 02:25:09.088187 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-04 02:25:09.088221 | orchestrator | Wednesday 04 February 2026 02:24:59 +0000 (0:00:00.322) 0:01:10.333 ****
2026-02-04 02:25:09.088233 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:25:09.088244 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:25:09.088255 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:25:09.088299 | orchestrator |
2026-02-04 02:25:09.088321 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-04 02:25:09.088341 | orchestrator | Wednesday 04 February 2026 02:25:00 +0000 (0:00:00.314) 0:01:10.647 ****
2026-02-04 02:25:09.088358 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:25:09.088372 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:25:09.088386 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:25:09.088399 | orchestrator |
2026-02-04 02:25:09.088412 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-04 02:25:09.088426 | orchestrator | Wednesday 04 February 2026 02:25:00 +0000 (0:00:00.301) 0:01:10.949 ****
2026-02-04 02:25:09.088439 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:25:09.088452 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:25:09.088465 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:25:09.088477 | orchestrator |
2026-02-04 02:25:09.088491 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-04 02:25:09.088504 | orchestrator | Wednesday 04 February 2026 02:25:00 +0000 (0:00:00.287) 0:01:11.236 ****
2026-02-04 02:25:09.088518 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:25:09.088532 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:25:09.088545 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:25:09.088558 | orchestrator |
2026-02-04 02:25:09.088569 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-04 02:25:09.088580 | orchestrator | Wednesday 04 February 2026 02:25:01 +0000 (0:00:00.490) 0:01:11.727 ****
2026-02-04 02:25:09.088591 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:25:09.088602 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:25:09.088613 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:25:09.088624 | orchestrator |
2026-02-04 02:25:09.088635 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-04 02:25:09.088646 | orchestrator | Wednesday 04 February 2026 02:25:01 +0000 (0:00:00.305) 0:01:12.033 ****
2026-02-04 02:25:09.088657 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:25:09.088668 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:25:09.088679 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:25:09.088690 | orchestrator |
2026-02-04 02:25:09.088701 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-04 02:25:09.088712 | orchestrator | Wednesday 04 February 2026 02:25:01 +0000 (0:00:00.300) 0:01:12.333 ****
2026-02-04 02:25:09.088723 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:25:09.088734 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:25:09.088745 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:25:09.088755 | orchestrator |
2026-02-04 02:25:09.088767 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-04 02:25:09.088778 | orchestrator | Wednesday 04 February 2026 02:25:02 +0000 (0:00:00.316) 0:01:12.649 ****
2026-02-04 02:25:09.088788 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:25:09.088799 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:25:09.088810 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:25:09.088838 | orchestrator |
2026-02-04 02:25:09.088850 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-04 02:25:09.088861 | orchestrator | Wednesday 04 February 2026 02:25:02 +0000 (0:00:00.498) 0:01:13.148 ****
2026-02-04 02:25:09.088872 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:25:09.088883 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:25:09.088894 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:25:09.088906 | orchestrator |
2026-02-04 02:25:09.088917 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-04 02:25:09.088937 | orchestrator | Wednesday 04 February 2026 02:25:03 +0000 (0:00:00.330) 0:01:13.479 ****
2026-02-04 02:25:09.088949 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:25:09.088960 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:25:09.088971 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:25:09.088982 | orchestrator |
2026-02-04 02:25:09.088993 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-04 02:25:09.089004 | orchestrator | Wednesday 04 February 2026 02:25:03 +0000 (0:00:00.316) 0:01:13.795 ****
2026-02-04 02:25:09.089033 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:25:09.089045 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:25:09.089056 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:25:09.089067 | orchestrator |
2026-02-04 02:25:09.089078 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-04 02:25:09.089096 | orchestrator | Wednesday 04 February 2026 02:25:03 +0000 (0:00:00.298) 0:01:14.093 ****
2026-02-04 02:25:09.089108 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:25:09.089119 | orchestrator |
2026-02-04 02:25:09.089130 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-04 02:25:09.089141 | orchestrator | Wednesday 04 February 2026 02:25:04 +0000 (0:00:00.751) 0:01:14.844 ****
2026-02-04 02:25:09.089152 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:25:09.089163 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:25:09.089174 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:25:09.089185 | orchestrator |
2026-02-04 02:25:09.089196 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-04 02:25:09.089207 | orchestrator | Wednesday 04 February 2026 02:25:04 +0000 (0:00:00.425) 0:01:15.270 ****
2026-02-04 02:25:09.089218 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:25:09.089229 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:25:09.089240 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:25:09.089251 | orchestrator |
2026-02-04 02:25:09.089262 | orchestrator | TASK [ovn-db : Check NB cluster status]
**************************************** 2026-02-04 02:25:09.089299 | orchestrator | Wednesday 04 February 2026 02:25:05 +0000 (0:00:00.417) 0:01:15.688 **** 2026-02-04 02:25:09.089319 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:25:09.089338 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:25:09.089357 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:25:09.089376 | orchestrator | 2026-02-04 02:25:09.089388 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-02-04 02:25:09.089399 | orchestrator | Wednesday 04 February 2026 02:25:05 +0000 (0:00:00.370) 0:01:16.059 **** 2026-02-04 02:25:09.089410 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:25:09.089421 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:25:09.089432 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:25:09.089442 | orchestrator | 2026-02-04 02:25:09.089453 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-02-04 02:25:09.089464 | orchestrator | Wednesday 04 February 2026 02:25:06 +0000 (0:00:00.514) 0:01:16.574 **** 2026-02-04 02:25:09.089475 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:25:09.089486 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:25:09.089497 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:25:09.089507 | orchestrator | 2026-02-04 02:25:09.089518 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-02-04 02:25:09.089529 | orchestrator | Wednesday 04 February 2026 02:25:06 +0000 (0:00:00.355) 0:01:16.929 **** 2026-02-04 02:25:09.089541 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:25:09.089552 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:25:09.089562 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:25:09.089574 | orchestrator | 2026-02-04 02:25:09.089584 | orchestrator | TASK [ovn-db : Set 
bootstrap args fact for NB (new member)] ******************** 2026-02-04 02:25:09.089595 | orchestrator | Wednesday 04 February 2026 02:25:06 +0000 (0:00:00.320) 0:01:17.250 **** 2026-02-04 02:25:09.089620 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:25:09.089631 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:25:09.089642 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:25:09.089652 | orchestrator | 2026-02-04 02:25:09.089663 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-02-04 02:25:09.089674 | orchestrator | Wednesday 04 February 2026 02:25:07 +0000 (0:00:00.351) 0:01:17.602 **** 2026-02-04 02:25:09.089685 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:25:09.089696 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:25:09.089707 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:25:09.089718 | orchestrator | 2026-02-04 02:25:09.089729 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-04 02:25:09.089739 | orchestrator | Wednesday 04 February 2026 02:25:07 +0000 (0:00:00.496) 0:01:18.099 **** 2026-02-04 02:25:09.089752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:09.089767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-04 02:25:09.089779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:09.089807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.743707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.743821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.743838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.743850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.743887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.743905 | orchestrator | 2026-02-04 02:25:14.743926 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-04 02:25:14.743947 | orchestrator | Wednesday 04 February 2026 02:25:09 +0000 (0:00:01.327) 0:01:19.426 **** 2026-02-04 02:25:14.743966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.743988 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.744008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.744027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.744091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.744114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.744135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.744154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.744181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.744192 | orchestrator | 2026-02-04 02:25:14.744204 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-04 02:25:14.744215 | orchestrator | Wednesday 04 February 2026 02:25:12 +0000 (0:00:03.446) 0:01:22.873 **** 2026-02-04 02:25:14.744226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.744238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.744250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.744263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.744355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:14.744400 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:36.971213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:36.971465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:36.971499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:36.971522 | orchestrator | 2026-02-04 02:25:36.971543 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 02:25:36.971564 | 
orchestrator | Wednesday 04 February 2026 02:25:14 +0000 (0:00:01.779) 0:01:24.652 **** 2026-02-04 02:25:36.971583 | orchestrator | 2026-02-04 02:25:36.971605 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 02:25:36.971624 | orchestrator | Wednesday 04 February 2026 02:25:14 +0000 (0:00:00.078) 0:01:24.730 **** 2026-02-04 02:25:36.971643 | orchestrator | 2026-02-04 02:25:36.971663 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 02:25:36.971685 | orchestrator | Wednesday 04 February 2026 02:25:14 +0000 (0:00:00.066) 0:01:24.797 **** 2026-02-04 02:25:36.971705 | orchestrator | 2026-02-04 02:25:36.971726 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-04 02:25:36.971746 | orchestrator | Wednesday 04 February 2026 02:25:14 +0000 (0:00:00.278) 0:01:25.076 **** 2026-02-04 02:25:36.971767 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:25:36.971789 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:25:36.971809 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:25:36.971829 | orchestrator | 2026-02-04 02:25:36.971848 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-04 02:25:36.971867 | orchestrator | Wednesday 04 February 2026 02:25:17 +0000 (0:00:02.298) 0:01:27.374 **** 2026-02-04 02:25:36.971879 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:25:36.971890 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:25:36.971901 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:25:36.971912 | orchestrator | 2026-02-04 02:25:36.971923 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-04 02:25:36.971934 | orchestrator | Wednesday 04 February 2026 02:25:23 +0000 (0:00:06.493) 0:01:33.867 **** 2026-02-04 02:25:36.971945 | orchestrator | changed: 
[testbed-node-0] 2026-02-04 02:25:36.971955 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:25:36.971966 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:25:36.971977 | orchestrator | 2026-02-04 02:25:36.971988 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-04 02:25:36.971999 | orchestrator | Wednesday 04 February 2026 02:25:30 +0000 (0:00:07.250) 0:01:41.118 **** 2026-02-04 02:25:36.972009 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:25:36.972020 | orchestrator | 2026-02-04 02:25:36.972032 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-04 02:25:36.972050 | orchestrator | Wednesday 04 February 2026 02:25:30 +0000 (0:00:00.144) 0:01:41.262 **** 2026-02-04 02:25:36.972068 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:25:36.972088 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:25:36.972105 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:25:36.972122 | orchestrator | 2026-02-04 02:25:36.972140 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-04 02:25:36.972156 | orchestrator | Wednesday 04 February 2026 02:25:31 +0000 (0:00:00.923) 0:01:42.186 **** 2026-02-04 02:25:36.972173 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:25:36.972208 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:25:36.972225 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:25:36.972242 | orchestrator | 2026-02-04 02:25:36.972261 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-04 02:25:36.972280 | orchestrator | Wednesday 04 February 2026 02:25:32 +0000 (0:00:00.593) 0:01:42.779 **** 2026-02-04 02:25:36.972372 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:25:36.972392 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:25:36.972411 | orchestrator | ok: [testbed-node-2] 2026-02-04 
02:25:36.972432 | orchestrator | 2026-02-04 02:25:36.972450 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-04 02:25:36.972491 | orchestrator | Wednesday 04 February 2026 02:25:33 +0000 (0:00:00.740) 0:01:43.520 **** 2026-02-04 02:25:36.972513 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:25:36.972531 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:25:36.972549 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:25:36.972567 | orchestrator | 2026-02-04 02:25:36.972579 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-04 02:25:36.972590 | orchestrator | Wednesday 04 February 2026 02:25:33 +0000 (0:00:00.567) 0:01:44.087 **** 2026-02-04 02:25:36.972601 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:25:36.972612 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:25:36.972647 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:25:36.972659 | orchestrator | 2026-02-04 02:25:36.972670 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-04 02:25:36.972681 | orchestrator | Wednesday 04 February 2026 02:25:34 +0000 (0:00:00.694) 0:01:44.782 **** 2026-02-04 02:25:36.972694 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:25:36.972712 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:25:36.972728 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:25:36.972743 | orchestrator | 2026-02-04 02:25:36.972761 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-02-04 02:25:36.972777 | orchestrator | Wednesday 04 February 2026 02:25:35 +0000 (0:00:00.931) 0:01:45.714 **** 2026-02-04 02:25:36.972797 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:25:36.972817 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:25:36.972835 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:25:36.972854 | orchestrator | 2026-02-04 
02:25:36.972868 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-04 02:25:36.972879 | orchestrator | Wednesday 04 February 2026 02:25:35 +0000 (0:00:00.312) 0:01:46.026 **** 2026-02-04 02:25:36.972893 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:36.972908 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:36.972919 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:36.972930 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:36.972955 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:36.972967 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:36.972978 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:36.972996 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:36.973019 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351536 | orchestrator | 2026-02-04 02:25:43.351643 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-04 02:25:43.351658 | orchestrator | Wednesday 04 February 2026 02:25:36 +0000 (0:00:01.276) 0:01:47.302 **** 2026-02-04 02:25:43.351672 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351688 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351699 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351710 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351771 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-04 02:25:43.351809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351820 | orchestrator | 2026-02-04 02:25:43.351832 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-02-04 02:25:43.351843 | orchestrator | Wednesday 04 February 2026 02:25:40 +0000 (0:00:03.462) 0:01:50.765 **** 2026-02-04 02:25:43.351872 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351912 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351924 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 
02:25:43.351936 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.351990 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.352007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 02:25:43.352018 | orchestrator | 2026-02-04 02:25:43.352029 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 02:25:43.352040 | orchestrator | Wednesday 04 February 2026 02:25:43 +0000 (0:00:02.693) 0:01:53.458 **** 2026-02-04 02:25:43.352051 | orchestrator | 2026-02-04 02:25:43.352062 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 02:25:43.352073 | orchestrator | Wednesday 04 February 2026 02:25:43 +0000 (0:00:00.063) 0:01:53.522 **** 2026-02-04 02:25:43.352084 | orchestrator | 2026-02-04 02:25:43.352095 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 02:25:43.352106 | orchestrator | Wednesday 04 February 2026 02:25:43 +0000 (0:00:00.089) 0:01:53.612 **** 2026-02-04 02:25:43.352117 | orchestrator | 2026-02-04 02:25:43.352135 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-04 02:26:07.480890 | orchestrator | Wednesday 04 February 2026 02:25:43 +0000 (0:00:00.067) 0:01:53.679 **** 2026-02-04 02:26:07.480999 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:26:07.481014 | orchestrator | changed: 
[testbed-node-2] 2026-02-04 02:26:07.481023 | orchestrator | 2026-02-04 02:26:07.481033 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-04 02:26:07.481042 | orchestrator | Wednesday 04 February 2026 02:25:49 +0000 (0:00:06.097) 0:01:59.776 **** 2026-02-04 02:26:07.481051 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:26:07.481060 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:26:07.481069 | orchestrator | 2026-02-04 02:26:07.481078 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-04 02:26:07.481113 | orchestrator | Wednesday 04 February 2026 02:25:55 +0000 (0:00:06.177) 0:02:05.953 **** 2026-02-04 02:26:07.481123 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:26:07.481131 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:26:07.481139 | orchestrator | 2026-02-04 02:26:07.481148 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-04 02:26:07.481157 | orchestrator | Wednesday 04 February 2026 02:26:01 +0000 (0:00:06.230) 0:02:12.184 **** 2026-02-04 02:26:07.481165 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:26:07.481173 | orchestrator | 2026-02-04 02:26:07.481181 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-04 02:26:07.481190 | orchestrator | Wednesday 04 February 2026 02:26:01 +0000 (0:00:00.134) 0:02:12.319 **** 2026-02-04 02:26:07.481198 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:26:07.481209 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:26:07.481218 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:26:07.481226 | orchestrator | 2026-02-04 02:26:07.481235 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-04 02:26:07.481245 | orchestrator | Wednesday 04 February 2026 02:26:02 +0000 (0:00:01.019) 0:02:13.338 **** 
2026-02-04 02:26:07.481253 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:26:07.481261 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:26:07.481269 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:26:07.481278 | orchestrator | 2026-02-04 02:26:07.481285 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-04 02:26:07.481294 | orchestrator | Wednesday 04 February 2026 02:26:03 +0000 (0:00:00.634) 0:02:13.972 **** 2026-02-04 02:26:07.481303 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:26:07.481335 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:26:07.481345 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:26:07.481354 | orchestrator | 2026-02-04 02:26:07.481363 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-04 02:26:07.481371 | orchestrator | Wednesday 04 February 2026 02:26:04 +0000 (0:00:00.808) 0:02:14.781 **** 2026-02-04 02:26:07.481380 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:26:07.481388 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:26:07.481396 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:26:07.481405 | orchestrator | 2026-02-04 02:26:07.481414 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-04 02:26:07.481422 | orchestrator | Wednesday 04 February 2026 02:26:05 +0000 (0:00:00.629) 0:02:15.410 **** 2026-02-04 02:26:07.481431 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:26:07.481440 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:26:07.481449 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:26:07.481457 | orchestrator | 2026-02-04 02:26:07.481468 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-04 02:26:07.481477 | orchestrator | Wednesday 04 February 2026 02:26:06 +0000 (0:00:01.162) 0:02:16.573 **** 2026-02-04 02:26:07.481487 | orchestrator 
| ok: [testbed-node-0] 2026-02-04 02:26:07.481496 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:26:07.481504 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:26:07.481514 | orchestrator | 2026-02-04 02:26:07.481523 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:26:07.481534 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-04 02:26:07.481545 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-04 02:26:07.481554 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-04 02:26:07.481564 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 02:26:07.481583 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 02:26:07.481592 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 02:26:07.481600 | orchestrator | 2026-02-04 02:26:07.481608 | orchestrator | 2026-02-04 02:26:07.481629 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:26:07.481638 | orchestrator | Wednesday 04 February 2026 02:26:07 +0000 (0:00:00.871) 0:02:17.444 **** 2026-02-04 02:26:07.481646 | orchestrator | =============================================================================== 2026-02-04 02:26:07.481654 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 34.43s 2026-02-04 02:26:07.481663 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.64s 2026-02-04 02:26:07.481671 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.48s 2026-02-04 02:26:07.481679 | orchestrator | ovn-db 
: Restart ovn-sb-db container ----------------------------------- 12.67s 2026-02-04 02:26:07.481687 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.40s 2026-02-04 02:26:07.481716 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.46s 2026-02-04 02:26:07.481726 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.45s 2026-02-04 02:26:07.481734 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.69s 2026-02-04 02:26:07.481743 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.38s 2026-02-04 02:26:07.481752 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 1.78s 2026-02-04 02:26:07.481760 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.54s 2026-02-04 02:26:07.481768 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.52s 2026-02-04 02:26:07.481775 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.51s 2026-02-04 02:26:07.481783 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.36s 2026-02-04 02:26:07.481790 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.33s 2026-02-04 02:26:07.481798 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.28s 2026-02-04 02:26:07.481808 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.23s 2026-02-04 02:26:07.481817 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.20s 2026-02-04 02:26:07.481827 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.16s 2026-02-04 02:26:07.481836 | orchestrator | ovn-controller : 
Ensuring systemd override directory exists ------------- 1.14s 2026-02-04 02:26:07.792410 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-04 02:26:07.792502 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-02-04 02:26:09.949030 | orchestrator | 2026-02-04 02:26:09 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-04 02:26:20.101533 | orchestrator | 2026-02-04 02:26:20 | INFO  | Task 812f9953-24fe-4fb3-a342-fcea3a478fac (wipe-partitions) was prepared for execution. 2026-02-04 02:26:20.101629 | orchestrator | 2026-02-04 02:26:20 | INFO  | It takes a moment until task 812f9953-24fe-4fb3-a342-fcea3a478fac (wipe-partitions) has been started and output is visible here. 2026-02-04 02:26:32.827725 | orchestrator | 2026-02-04 02:26:32.827805 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-04 02:26:32.827812 | orchestrator | 2026-02-04 02:26:32.827817 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-04 02:26:32.827822 | orchestrator | Wednesday 04 February 2026 02:26:24 +0000 (0:00:00.132) 0:00:00.132 **** 2026-02-04 02:26:32.827842 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:26:32.827847 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:26:32.827851 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:26:32.827855 | orchestrator | 2026-02-04 02:26:32.827859 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-04 02:26:32.827863 | orchestrator | Wednesday 04 February 2026 02:26:24 +0000 (0:00:00.585) 0:00:00.717 **** 2026-02-04 02:26:32.827867 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:26:32.827871 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:26:32.827875 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:26:32.827879 | orchestrator | 2026-02-04 02:26:32.827883 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-04 02:26:32.827887 | orchestrator | Wednesday 04 February 2026 02:26:25 +0000 (0:00:00.381) 0:00:01.098 **** 2026-02-04 02:26:32.827891 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:26:32.827896 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:26:32.827899 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:26:32.827903 | orchestrator | 2026-02-04 02:26:32.827907 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-04 02:26:32.827911 | orchestrator | Wednesday 04 February 2026 02:26:25 +0000 (0:00:00.594) 0:00:01.693 **** 2026-02-04 02:26:32.827915 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:26:32.827919 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:26:32.827923 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:26:32.827927 | orchestrator | 2026-02-04 02:26:32.827931 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-04 02:26:32.827935 | orchestrator | Wednesday 04 February 2026 02:26:26 +0000 (0:00:00.300) 0:00:01.993 **** 2026-02-04 02:26:32.827939 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-04 02:26:32.827943 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-04 02:26:32.827947 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-04 02:26:32.827951 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-04 02:26:32.827955 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-04 02:26:32.827959 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-04 02:26:32.827974 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-04 02:26:32.827978 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-04 02:26:32.827982 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-02-04 02:26:32.827986 | orchestrator | 2026-02-04 02:26:32.827989 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-04 02:26:32.827993 | orchestrator | Wednesday 04 February 2026 02:26:27 +0000 (0:00:01.212) 0:00:03.206 **** 2026-02-04 02:26:32.827997 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-04 02:26:32.828001 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-04 02:26:32.828005 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-04 02:26:32.828009 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-04 02:26:32.828013 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-04 02:26:32.828017 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-02-04 02:26:32.828020 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-04 02:26:32.828024 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-04 02:26:32.828028 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-04 02:26:32.828032 | orchestrator | 2026-02-04 02:26:32.828035 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-04 02:26:32.828039 | orchestrator | Wednesday 04 February 2026 02:26:29 +0000 (0:00:01.569) 0:00:04.776 **** 2026-02-04 02:26:32.828043 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-04 02:26:32.828047 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-04 02:26:32.828050 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-04 02:26:32.828054 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-04 02:26:32.828062 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-04 02:26:32.828066 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-04 02:26:32.828070 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-04 02:26:32.828074 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-04 02:26:32.828077 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-04 02:26:32.828081 | orchestrator | 2026-02-04 02:26:32.828085 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-04 02:26:32.828089 | orchestrator | Wednesday 04 February 2026 02:26:31 +0000 (0:00:02.088) 0:00:06.864 **** 2026-02-04 02:26:32.828093 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:26:32.828096 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:26:32.828100 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:26:32.828104 | orchestrator | 2026-02-04 02:26:32.828108 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-02-04 02:26:32.828112 | orchestrator | Wednesday 04 February 2026 02:26:31 +0000 (0:00:00.612) 0:00:07.477 **** 2026-02-04 02:26:32.828116 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:26:32.828119 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:26:32.828123 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:26:32.828140 | orchestrator | 2026-02-04 02:26:32.828144 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:26:32.828154 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:26:32.828160 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:26:32.828174 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:26:32.828178 | orchestrator | 2026-02-04 02:26:32.828182 | orchestrator | 2026-02-04 02:26:32.828186 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:26:32.828190 | orchestrator | Wednesday 04 February 2026 02:26:32 +0000 
(0:00:00.667) 0:00:08.144 **** 2026-02-04 02:26:32.828194 | orchestrator | =============================================================================== 2026-02-04 02:26:32.828198 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.09s 2026-02-04 02:26:32.828202 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.57s 2026-02-04 02:26:32.828205 | orchestrator | Check device availability ----------------------------------------------- 1.21s 2026-02-04 02:26:32.828209 | orchestrator | Request device events from the kernel ----------------------------------- 0.67s 2026-02-04 02:26:32.828213 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s 2026-02-04 02:26:32.828217 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.59s 2026-02-04 02:26:32.828221 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2026-02-04 02:26:32.828224 | orchestrator | Remove all rook related logical devices --------------------------------- 0.38s 2026-02-04 02:26:32.828228 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.30s 2026-02-04 02:26:45.258220 | orchestrator | 2026-02-04 02:26:45 | INFO  | Task 5495ded8-5758-44d9-b6f5-4048f9698b97 (facts) was prepared for execution. 2026-02-04 02:26:45.258449 | orchestrator | 2026-02-04 02:26:45 | INFO  | It takes a moment until task 5495ded8-5758-44d9-b6f5-4048f9698b97 (facts) has been started and output is visible here. 
2026-02-04 02:26:58.089026 | orchestrator | 2026-02-04 02:26:58.089128 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-04 02:26:58.089140 | orchestrator | 2026-02-04 02:26:58.089148 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-04 02:26:58.089156 | orchestrator | Wednesday 04 February 2026 02:26:49 +0000 (0:00:00.274) 0:00:00.274 **** 2026-02-04 02:26:58.089187 | orchestrator | ok: [testbed-manager] 2026-02-04 02:26:58.089195 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:26:58.089201 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:26:58.089207 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:26:58.089214 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:26:58.089220 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:26:58.089227 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:26:58.089234 | orchestrator | 2026-02-04 02:26:58.089240 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-04 02:26:58.089248 | orchestrator | Wednesday 04 February 2026 02:26:50 +0000 (0:00:01.118) 0:00:01.392 **** 2026-02-04 02:26:58.089255 | orchestrator | skipping: [testbed-manager] 2026-02-04 02:26:58.089263 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:26:58.089269 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:26:58.089276 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:26:58.089282 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:26:58.089289 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:26:58.089295 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:26:58.089301 | orchestrator | 2026-02-04 02:26:58.089308 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-04 02:26:58.089314 | orchestrator | 2026-02-04 02:26:58.089320 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-04 02:26:58.089326 | orchestrator | Wednesday 04 February 2026 02:26:51 +0000 (0:00:01.289) 0:00:02.682 **** 2026-02-04 02:26:58.089332 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:26:58.089361 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:26:58.089368 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:26:58.089374 | orchestrator | ok: [testbed-manager] 2026-02-04 02:26:58.089380 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:26:58.089386 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:26:58.089392 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:26:58.089398 | orchestrator | 2026-02-04 02:26:58.089405 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-04 02:26:58.089411 | orchestrator | 2026-02-04 02:26:58.089418 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-04 02:26:58.089424 | orchestrator | Wednesday 04 February 2026 02:26:57 +0000 (0:00:05.134) 0:00:07.816 **** 2026-02-04 02:26:58.089431 | orchestrator | skipping: [testbed-manager] 2026-02-04 02:26:58.089438 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:26:58.089444 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:26:58.089450 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:26:58.089456 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:26:58.089462 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:26:58.089468 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:26:58.089475 | orchestrator | 2026-02-04 02:26:58.089481 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:26:58.089488 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:26:58.089576 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-04 02:26:58.089593 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:26:58.089599 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:26:58.089605 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:26:58.089611 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:26:58.089627 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:26:58.089633 | orchestrator | 2026-02-04 02:26:58.089640 | orchestrator | 2026-02-04 02:26:58.089646 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:26:58.089652 | orchestrator | Wednesday 04 February 2026 02:26:57 +0000 (0:00:00.557) 0:00:08.374 **** 2026-02-04 02:26:58.089660 | orchestrator | =============================================================================== 2026-02-04 02:26:58.089667 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.13s 2026-02-04 02:26:58.089674 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.29s 2026-02-04 02:26:58.089682 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s 2026-02-04 02:26:58.089689 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-02-04 02:27:00.585647 | orchestrator | 2026-02-04 02:27:00 | INFO  | Task 739558d5-b373-47f0-83f4-e703eb81b998 (ceph-configure-lvm-volumes) was prepared for execution. 
2026-02-04 02:27:00.585729 | orchestrator | 2026-02-04 02:27:00 | INFO  | It takes a moment until task 739558d5-b373-47f0-83f4-e703eb81b998 (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-02-04 02:27:12.739680 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-04 02:27:12.739783 | orchestrator | 2.16.14 2026-02-04 02:27:12.739795 | orchestrator | 2026-02-04 02:27:12.739804 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-04 02:27:12.739814 | orchestrator | 2026-02-04 02:27:12.739822 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-04 02:27:12.739830 | orchestrator | Wednesday 04 February 2026 02:27:05 +0000 (0:00:00.335) 0:00:00.335 **** 2026-02-04 02:27:12.739839 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-04 02:27:12.739847 | orchestrator | 2026-02-04 02:27:12.739869 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-04 02:27:12.739877 | orchestrator | Wednesday 04 February 2026 02:27:05 +0000 (0:00:00.264) 0:00:00.600 **** 2026-02-04 02:27:12.739885 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:27:12.739892 | orchestrator | 2026-02-04 02:27:12.739898 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.739905 | orchestrator | Wednesday 04 February 2026 02:27:05 +0000 (0:00:00.236) 0:00:00.837 **** 2026-02-04 02:27:12.739912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-04 02:27:12.739919 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-04 02:27:12.739926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-04 02:27:12.739933 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-04 02:27:12.739940 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-04 02:27:12.739947 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-04 02:27:12.739954 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-04 02:27:12.739961 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-04 02:27:12.739968 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-04 02:27:12.739975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-04 02:27:12.739982 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-04 02:27:12.739989 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-04 02:27:12.740016 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-04 02:27:12.740024 | orchestrator | 2026-02-04 02:27:12.740030 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.740038 | orchestrator | Wednesday 04 February 2026 02:27:06 +0000 (0:00:00.492) 0:00:01.329 **** 2026-02-04 02:27:12.740045 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740052 | orchestrator | 2026-02-04 02:27:12.740060 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.740066 | orchestrator | Wednesday 04 February 2026 02:27:06 +0000 (0:00:00.199) 0:00:01.529 **** 2026-02-04 02:27:12.740073 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740080 | orchestrator | 2026-02-04 02:27:12.740087 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.740093 | orchestrator | Wednesday 04 February 2026 02:27:06 +0000 (0:00:00.211) 0:00:01.740 **** 2026-02-04 02:27:12.740100 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740107 | orchestrator | 2026-02-04 02:27:12.740115 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.740122 | orchestrator | Wednesday 04 February 2026 02:27:06 +0000 (0:00:00.205) 0:00:01.945 **** 2026-02-04 02:27:12.740129 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740136 | orchestrator | 2026-02-04 02:27:12.740143 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.740150 | orchestrator | Wednesday 04 February 2026 02:27:06 +0000 (0:00:00.211) 0:00:02.157 **** 2026-02-04 02:27:12.740156 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740163 | orchestrator | 2026-02-04 02:27:12.740170 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.740177 | orchestrator | Wednesday 04 February 2026 02:27:07 +0000 (0:00:00.222) 0:00:02.379 **** 2026-02-04 02:27:12.740184 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740191 | orchestrator | 2026-02-04 02:27:12.740199 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.740206 | orchestrator | Wednesday 04 February 2026 02:27:07 +0000 (0:00:00.209) 0:00:02.589 **** 2026-02-04 02:27:12.740213 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740220 | orchestrator | 2026-02-04 02:27:12.740227 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.740234 | orchestrator | Wednesday 04 February 2026 02:27:07 +0000 (0:00:00.207) 0:00:02.796 **** 
2026-02-04 02:27:12.740241 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740248 | orchestrator | 2026-02-04 02:27:12.740255 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.740263 | orchestrator | Wednesday 04 February 2026 02:27:07 +0000 (0:00:00.217) 0:00:03.014 **** 2026-02-04 02:27:12.740271 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861) 2026-02-04 02:27:12.740282 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861) 2026-02-04 02:27:12.740290 | orchestrator | 2026-02-04 02:27:12.740298 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.740321 | orchestrator | Wednesday 04 February 2026 02:27:08 +0000 (0:00:00.430) 0:00:03.444 **** 2026-02-04 02:27:12.740328 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388) 2026-02-04 02:27:12.740336 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388) 2026-02-04 02:27:12.740343 | orchestrator | 2026-02-04 02:27:12.740369 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.740376 | orchestrator | Wednesday 04 February 2026 02:27:08 +0000 (0:00:00.647) 0:00:04.092 **** 2026-02-04 02:27:12.740389 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40) 2026-02-04 02:27:12.740452 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40) 2026-02-04 02:27:12.740461 | orchestrator | 2026-02-04 02:27:12.740468 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.740474 | orchestrator | Wednesday 04 February 2026 02:27:09 
+0000 (0:00:00.648) 0:00:04.741 **** 2026-02-04 02:27:12.740482 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811) 2026-02-04 02:27:12.740489 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811) 2026-02-04 02:27:12.740496 | orchestrator | 2026-02-04 02:27:12.740503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:12.740509 | orchestrator | Wednesday 04 February 2026 02:27:10 +0000 (0:00:00.937) 0:00:05.679 **** 2026-02-04 02:27:12.740516 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-04 02:27:12.740523 | orchestrator | 2026-02-04 02:27:12.740530 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:12.740537 | orchestrator | Wednesday 04 February 2026 02:27:10 +0000 (0:00:00.345) 0:00:06.024 **** 2026-02-04 02:27:12.740544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-04 02:27:12.740551 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-04 02:27:12.740558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-04 02:27:12.740565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-04 02:27:12.740572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-04 02:27:12.740579 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-04 02:27:12.740586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-04 02:27:12.740593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-3 => (item=loop7) 2026-02-04 02:27:12.740600 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-04 02:27:12.740606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-04 02:27:12.740613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-04 02:27:12.740620 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-04 02:27:12.740627 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-04 02:27:12.740634 | orchestrator | 2026-02-04 02:27:12.740641 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:12.740648 | orchestrator | Wednesday 04 February 2026 02:27:11 +0000 (0:00:00.425) 0:00:06.450 **** 2026-02-04 02:27:12.740655 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740662 | orchestrator | 2026-02-04 02:27:12.740669 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:12.740676 | orchestrator | Wednesday 04 February 2026 02:27:11 +0000 (0:00:00.236) 0:00:06.686 **** 2026-02-04 02:27:12.740683 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740690 | orchestrator | 2026-02-04 02:27:12.740697 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:12.740704 | orchestrator | Wednesday 04 February 2026 02:27:11 +0000 (0:00:00.210) 0:00:06.897 **** 2026-02-04 02:27:12.740710 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740716 | orchestrator | 2026-02-04 02:27:12.740722 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:12.740728 | orchestrator | Wednesday 04 February 2026 02:27:11 
+0000 (0:00:00.230) 0:00:07.127 **** 2026-02-04 02:27:12.740741 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740748 | orchestrator | 2026-02-04 02:27:12.740755 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:12.740762 | orchestrator | Wednesday 04 February 2026 02:27:12 +0000 (0:00:00.213) 0:00:07.341 **** 2026-02-04 02:27:12.740769 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740776 | orchestrator | 2026-02-04 02:27:12.740783 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:12.740790 | orchestrator | Wednesday 04 February 2026 02:27:12 +0000 (0:00:00.235) 0:00:07.577 **** 2026-02-04 02:27:12.740797 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740804 | orchestrator | 2026-02-04 02:27:12.740811 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:12.740819 | orchestrator | Wednesday 04 February 2026 02:27:12 +0000 (0:00:00.201) 0:00:07.779 **** 2026-02-04 02:27:12.740826 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:12.740833 | orchestrator | 2026-02-04 02:27:12.740848 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:20.861523 | orchestrator | Wednesday 04 February 2026 02:27:12 +0000 (0:00:00.200) 0:00:07.979 **** 2026-02-04 02:27:20.861612 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.861624 | orchestrator | 2026-02-04 02:27:20.861633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:20.861641 | orchestrator | Wednesday 04 February 2026 02:27:12 +0000 (0:00:00.219) 0:00:08.198 **** 2026-02-04 02:27:20.861648 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-04 02:27:20.861657 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-04 
02:27:20.861665 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-04 02:27:20.861685 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-04 02:27:20.861693 | orchestrator | 2026-02-04 02:27:20.861700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:20.861708 | orchestrator | Wednesday 04 February 2026 02:27:14 +0000 (0:00:01.107) 0:00:09.306 **** 2026-02-04 02:27:20.861715 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.861722 | orchestrator | 2026-02-04 02:27:20.861730 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:20.861737 | orchestrator | Wednesday 04 February 2026 02:27:14 +0000 (0:00:00.223) 0:00:09.529 **** 2026-02-04 02:27:20.861745 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.861752 | orchestrator | 2026-02-04 02:27:20.861759 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:20.861767 | orchestrator | Wednesday 04 February 2026 02:27:14 +0000 (0:00:00.219) 0:00:09.749 **** 2026-02-04 02:27:20.861774 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.861781 | orchestrator | 2026-02-04 02:27:20.861788 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:20.861796 | orchestrator | Wednesday 04 February 2026 02:27:14 +0000 (0:00:00.215) 0:00:09.964 **** 2026-02-04 02:27:20.861803 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.861810 | orchestrator | 2026-02-04 02:27:20.861817 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-04 02:27:20.861825 | orchestrator | Wednesday 04 February 2026 02:27:14 +0000 (0:00:00.213) 0:00:10.178 **** 2026-02-04 02:27:20.861832 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-02-04 02:27:20.861839 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-02-04 02:27:20.861847 | orchestrator | 2026-02-04 02:27:20.861854 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-04 02:27:20.861861 | orchestrator | Wednesday 04 February 2026 02:27:15 +0000 (0:00:00.192) 0:00:10.371 **** 2026-02-04 02:27:20.861868 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.861876 | orchestrator | 2026-02-04 02:27:20.861883 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-04 02:27:20.861890 | orchestrator | Wednesday 04 February 2026 02:27:15 +0000 (0:00:00.157) 0:00:10.528 **** 2026-02-04 02:27:20.861917 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.861925 | orchestrator | 2026-02-04 02:27:20.861933 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-04 02:27:20.861940 | orchestrator | Wednesday 04 February 2026 02:27:15 +0000 (0:00:00.155) 0:00:10.684 **** 2026-02-04 02:27:20.861947 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.861954 | orchestrator | 2026-02-04 02:27:20.861961 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-04 02:27:20.861969 | orchestrator | Wednesday 04 February 2026 02:27:15 +0000 (0:00:00.156) 0:00:10.840 **** 2026-02-04 02:27:20.861976 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:27:20.861984 | orchestrator | 2026-02-04 02:27:20.861991 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-04 02:27:20.861998 | orchestrator | Wednesday 04 February 2026 02:27:15 +0000 (0:00:00.151) 0:00:10.992 **** 2026-02-04 02:27:20.862006 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33635451-34dd-546b-bd98-6f515d7d790f'}}) 2026-02-04 02:27:20.862053 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f6bda8a0-a04e-51a6-8ac1-652b1721251e'}}) 2026-02-04 02:27:20.862062 | orchestrator | 2026-02-04 02:27:20.862071 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-04 02:27:20.862080 | orchestrator | Wednesday 04 February 2026 02:27:15 +0000 (0:00:00.198) 0:00:11.190 **** 2026-02-04 02:27:20.862090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33635451-34dd-546b-bd98-6f515d7d790f'}})  2026-02-04 02:27:20.862100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f6bda8a0-a04e-51a6-8ac1-652b1721251e'}})  2026-02-04 02:27:20.862109 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.862119 | orchestrator | 2026-02-04 02:27:20.862128 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-04 02:27:20.862137 | orchestrator | Wednesday 04 February 2026 02:27:16 +0000 (0:00:00.391) 0:00:11.582 **** 2026-02-04 02:27:20.862146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33635451-34dd-546b-bd98-6f515d7d790f'}})  2026-02-04 02:27:20.862154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f6bda8a0-a04e-51a6-8ac1-652b1721251e'}})  2026-02-04 02:27:20.862162 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.862170 | orchestrator | 2026-02-04 02:27:20.862178 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-04 02:27:20.862186 | orchestrator | Wednesday 04 February 2026 02:27:16 +0000 (0:00:00.168) 0:00:11.751 **** 2026-02-04 02:27:20.862193 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33635451-34dd-546b-bd98-6f515d7d790f'}})  2026-02-04 02:27:20.862216 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f6bda8a0-a04e-51a6-8ac1-652b1721251e'}})  2026-02-04 02:27:20.862225 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.862232 | orchestrator | 2026-02-04 02:27:20.862241 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-04 02:27:20.862249 | orchestrator | Wednesday 04 February 2026 02:27:16 +0000 (0:00:00.159) 0:00:11.911 **** 2026-02-04 02:27:20.862257 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:27:20.862265 | orchestrator | 2026-02-04 02:27:20.862273 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-04 02:27:20.862285 | orchestrator | Wednesday 04 February 2026 02:27:16 +0000 (0:00:00.161) 0:00:12.072 **** 2026-02-04 02:27:20.862293 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:27:20.862301 | orchestrator | 2026-02-04 02:27:20.862309 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-04 02:27:20.862325 | orchestrator | Wednesday 04 February 2026 02:27:16 +0000 (0:00:00.138) 0:00:12.210 **** 2026-02-04 02:27:20.862440 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.862458 | orchestrator | 2026-02-04 02:27:20.862470 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-04 02:27:20.862482 | orchestrator | Wednesday 04 February 2026 02:27:17 +0000 (0:00:00.142) 0:00:12.353 **** 2026-02-04 02:27:20.862495 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.862508 | orchestrator | 2026-02-04 02:27:20.862520 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-04 02:27:20.862533 | orchestrator | Wednesday 04 February 2026 02:27:17 +0000 (0:00:00.142) 0:00:12.495 **** 2026-02-04 02:27:20.862546 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.862559 | orchestrator | 2026-02-04 
02:27:20.862572 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-04 02:27:20.862585 | orchestrator | Wednesday 04 February 2026 02:27:17 +0000 (0:00:00.156) 0:00:12.652 **** 2026-02-04 02:27:20.862599 | orchestrator | ok: [testbed-node-3] => { 2026-02-04 02:27:20.862612 | orchestrator |  "ceph_osd_devices": { 2026-02-04 02:27:20.862627 | orchestrator |  "sdb": { 2026-02-04 02:27:20.862641 | orchestrator |  "osd_lvm_uuid": "33635451-34dd-546b-bd98-6f515d7d790f" 2026-02-04 02:27:20.862656 | orchestrator |  }, 2026-02-04 02:27:20.862670 | orchestrator |  "sdc": { 2026-02-04 02:27:20.862685 | orchestrator |  "osd_lvm_uuid": "f6bda8a0-a04e-51a6-8ac1-652b1721251e" 2026-02-04 02:27:20.862700 | orchestrator |  } 2026-02-04 02:27:20.862714 | orchestrator |  } 2026-02-04 02:27:20.862729 | orchestrator | } 2026-02-04 02:27:20.862742 | orchestrator | 2026-02-04 02:27:20.862755 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-04 02:27:20.862768 | orchestrator | Wednesday 04 February 2026 02:27:17 +0000 (0:00:00.149) 0:00:12.801 **** 2026-02-04 02:27:20.862781 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.862794 | orchestrator | 2026-02-04 02:27:20.862807 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-04 02:27:20.862821 | orchestrator | Wednesday 04 February 2026 02:27:17 +0000 (0:00:00.139) 0:00:12.941 **** 2026-02-04 02:27:20.862834 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.862847 | orchestrator | 2026-02-04 02:27:20.862861 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-04 02:27:20.862875 | orchestrator | Wednesday 04 February 2026 02:27:17 +0000 (0:00:00.139) 0:00:13.081 **** 2026-02-04 02:27:20.862889 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:27:20.862902 | orchestrator | 2026-02-04 
02:27:20.862915 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-04 02:27:20.862929 | orchestrator | Wednesday 04 February 2026 02:27:17 +0000 (0:00:00.147) 0:00:13.228 **** 2026-02-04 02:27:20.862942 | orchestrator | changed: [testbed-node-3] => { 2026-02-04 02:27:20.862956 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-04 02:27:20.862969 | orchestrator |  "ceph_osd_devices": { 2026-02-04 02:27:20.862982 | orchestrator |  "sdb": { 2026-02-04 02:27:20.862995 | orchestrator |  "osd_lvm_uuid": "33635451-34dd-546b-bd98-6f515d7d790f" 2026-02-04 02:27:20.863009 | orchestrator |  }, 2026-02-04 02:27:20.863023 | orchestrator |  "sdc": { 2026-02-04 02:27:20.863037 | orchestrator |  "osd_lvm_uuid": "f6bda8a0-a04e-51a6-8ac1-652b1721251e" 2026-02-04 02:27:20.863050 | orchestrator |  } 2026-02-04 02:27:20.863064 | orchestrator |  }, 2026-02-04 02:27:20.863076 | orchestrator |  "lvm_volumes": [ 2026-02-04 02:27:20.863090 | orchestrator |  { 2026-02-04 02:27:20.863103 | orchestrator |  "data": "osd-block-33635451-34dd-546b-bd98-6f515d7d790f", 2026-02-04 02:27:20.863117 | orchestrator |  "data_vg": "ceph-33635451-34dd-546b-bd98-6f515d7d790f" 2026-02-04 02:27:20.863130 | orchestrator |  }, 2026-02-04 02:27:20.863144 | orchestrator |  { 2026-02-04 02:27:20.863156 | orchestrator |  "data": "osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e", 2026-02-04 02:27:20.863179 | orchestrator |  "data_vg": "ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e" 2026-02-04 02:27:20.863192 | orchestrator |  } 2026-02-04 02:27:20.863205 | orchestrator |  ] 2026-02-04 02:27:20.863219 | orchestrator |  } 2026-02-04 02:27:20.863233 | orchestrator | } 2026-02-04 02:27:20.863246 | orchestrator | 2026-02-04 02:27:20.863260 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-04 02:27:20.863273 | orchestrator | Wednesday 04 February 2026 02:27:18 +0000 (0:00:00.454) 0:00:13.683 **** 2026-02-04 
02:27:20.863286 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-04 02:27:20.863298 | orchestrator | 2026-02-04 02:27:20.863311 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-04 02:27:20.863325 | orchestrator | 2026-02-04 02:27:20.863338 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-04 02:27:20.863377 | orchestrator | Wednesday 04 February 2026 02:27:20 +0000 (0:00:01.887) 0:00:15.570 **** 2026-02-04 02:27:20.863391 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-04 02:27:20.863405 | orchestrator | 2026-02-04 02:27:20.863419 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-04 02:27:20.863433 | orchestrator | Wednesday 04 February 2026 02:27:20 +0000 (0:00:00.270) 0:00:15.841 **** 2026-02-04 02:27:20.863446 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:27:20.863459 | orchestrator | 2026-02-04 02:27:20.863484 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.346431 | orchestrator | Wednesday 04 February 2026 02:27:20 +0000 (0:00:00.267) 0:00:16.108 **** 2026-02-04 02:27:29.346527 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-04 02:27:29.346536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-04 02:27:29.346543 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-04 02:27:29.346565 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-04 02:27:29.346571 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-04 02:27:29.346577 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-04 02:27:29.346583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-04 02:27:29.346590 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-04 02:27:29.346596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-04 02:27:29.346603 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-04 02:27:29.346609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-04 02:27:29.346616 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-04 02:27:29.346622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-04 02:27:29.346629 | orchestrator | 2026-02-04 02:27:29.346636 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.346641 | orchestrator | Wednesday 04 February 2026 02:27:21 +0000 (0:00:00.431) 0:00:16.540 **** 2026-02-04 02:27:29.346648 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.346656 | orchestrator | 2026-02-04 02:27:29.346662 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.346668 | orchestrator | Wednesday 04 February 2026 02:27:21 +0000 (0:00:00.209) 0:00:16.749 **** 2026-02-04 02:27:29.346674 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.346681 | orchestrator | 2026-02-04 02:27:29.346688 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.346694 | orchestrator | Wednesday 04 February 2026 02:27:21 +0000 (0:00:00.209) 0:00:16.958 **** 2026-02-04 02:27:29.346718 | orchestrator | skipping: 
[testbed-node-4] 2026-02-04 02:27:29.346724 | orchestrator | 2026-02-04 02:27:29.346730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.346736 | orchestrator | Wednesday 04 February 2026 02:27:21 +0000 (0:00:00.202) 0:00:17.161 **** 2026-02-04 02:27:29.346742 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.346748 | orchestrator | 2026-02-04 02:27:29.346754 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.346828 | orchestrator | Wednesday 04 February 2026 02:27:22 +0000 (0:00:00.651) 0:00:17.813 **** 2026-02-04 02:27:29.346835 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.346840 | orchestrator | 2026-02-04 02:27:29.346846 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.346852 | orchestrator | Wednesday 04 February 2026 02:27:22 +0000 (0:00:00.217) 0:00:18.030 **** 2026-02-04 02:27:29.346858 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.346864 | orchestrator | 2026-02-04 02:27:29.346869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.346875 | orchestrator | Wednesday 04 February 2026 02:27:22 +0000 (0:00:00.218) 0:00:18.248 **** 2026-02-04 02:27:29.346881 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.346886 | orchestrator | 2026-02-04 02:27:29.346892 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.346898 | orchestrator | Wednesday 04 February 2026 02:27:23 +0000 (0:00:00.225) 0:00:18.474 **** 2026-02-04 02:27:29.346903 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.346909 | orchestrator | 2026-02-04 02:27:29.346915 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.346921 | 
orchestrator | Wednesday 04 February 2026 02:27:23 +0000 (0:00:00.210) 0:00:18.684 **** 2026-02-04 02:27:29.346927 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f) 2026-02-04 02:27:29.346935 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f) 2026-02-04 02:27:29.346940 | orchestrator | 2026-02-04 02:27:29.346947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.346953 | orchestrator | Wednesday 04 February 2026 02:27:23 +0000 (0:00:00.425) 0:00:19.110 **** 2026-02-04 02:27:29.346959 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536) 2026-02-04 02:27:29.346965 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536) 2026-02-04 02:27:29.346972 | orchestrator | 2026-02-04 02:27:29.346977 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.346983 | orchestrator | Wednesday 04 February 2026 02:27:24 +0000 (0:00:00.445) 0:00:19.555 **** 2026-02-04 02:27:29.346989 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd) 2026-02-04 02:27:29.346996 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd) 2026-02-04 02:27:29.347002 | orchestrator | 2026-02-04 02:27:29.347008 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.347031 | orchestrator | Wednesday 04 February 2026 02:27:24 +0000 (0:00:00.447) 0:00:20.003 **** 2026-02-04 02:27:29.347037 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23) 2026-02-04 02:27:29.347044 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23) 2026-02-04 02:27:29.347052 | orchestrator | 2026-02-04 02:27:29.347058 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:29.347070 | orchestrator | Wednesday 04 February 2026 02:27:25 +0000 (0:00:00.458) 0:00:20.461 **** 2026-02-04 02:27:29.347077 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-04 02:27:29.347090 | orchestrator | 2026-02-04 02:27:29.347096 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:29.347102 | orchestrator | Wednesday 04 February 2026 02:27:25 +0000 (0:00:00.364) 0:00:20.826 **** 2026-02-04 02:27:29.347108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-04 02:27:29.347115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-04 02:27:29.347121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-04 02:27:29.347127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-04 02:27:29.347133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-04 02:27:29.347139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-04 02:27:29.347145 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-04 02:27:29.347151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-04 02:27:29.347157 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-04 02:27:29.347163 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-04 02:27:29.347170 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-04 02:27:29.347176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-04 02:27:29.347181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-04 02:27:29.347188 | orchestrator | 2026-02-04 02:27:29.347194 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:29.347200 | orchestrator | Wednesday 04 February 2026 02:27:25 +0000 (0:00:00.399) 0:00:21.226 **** 2026-02-04 02:27:29.347206 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.347212 | orchestrator | 2026-02-04 02:27:29.347218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:29.347224 | orchestrator | Wednesday 04 February 2026 02:27:26 +0000 (0:00:00.664) 0:00:21.891 **** 2026-02-04 02:27:29.347230 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.347236 | orchestrator | 2026-02-04 02:27:29.347242 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:29.347248 | orchestrator | Wednesday 04 February 2026 02:27:26 +0000 (0:00:00.220) 0:00:22.111 **** 2026-02-04 02:27:29.347253 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.347260 | orchestrator | 2026-02-04 02:27:29.347266 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:29.347272 | orchestrator | Wednesday 04 February 2026 02:27:27 +0000 (0:00:00.223) 0:00:22.335 **** 2026-02-04 02:27:29.347278 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.347284 | orchestrator | 2026-02-04 02:27:29.347290 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-04 02:27:29.347296 | orchestrator | Wednesday 04 February 2026 02:27:27 +0000 (0:00:00.208) 0:00:22.543 **** 2026-02-04 02:27:29.347302 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.347309 | orchestrator | 2026-02-04 02:27:29.347315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:29.347322 | orchestrator | Wednesday 04 February 2026 02:27:27 +0000 (0:00:00.239) 0:00:22.783 **** 2026-02-04 02:27:29.347328 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.347334 | orchestrator | 2026-02-04 02:27:29.347340 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:29.347345 | orchestrator | Wednesday 04 February 2026 02:27:27 +0000 (0:00:00.227) 0:00:23.010 **** 2026-02-04 02:27:29.347351 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.347381 | orchestrator | 2026-02-04 02:27:29.347387 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:29.347393 | orchestrator | Wednesday 04 February 2026 02:27:27 +0000 (0:00:00.241) 0:00:23.252 **** 2026-02-04 02:27:29.347399 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:29.347405 | orchestrator | 2026-02-04 02:27:29.347411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:29.347417 | orchestrator | Wednesday 04 February 2026 02:27:28 +0000 (0:00:00.209) 0:00:23.461 **** 2026-02-04 02:27:29.347423 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-04 02:27:29.347431 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-04 02:27:29.347437 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-04 02:27:29.347443 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-04 02:27:29.347448 | orchestrator | 2026-02-04 
02:27:29.347454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:29.347461 | orchestrator | Wednesday 04 February 2026 02:27:29 +0000 (0:00:00.907) 0:00:24.369 **** 2026-02-04 02:27:29.347467 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.115118 | orchestrator | 2026-02-04 02:27:36.115234 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:36.115247 | orchestrator | Wednesday 04 February 2026 02:27:29 +0000 (0:00:00.226) 0:00:24.595 **** 2026-02-04 02:27:36.115255 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.115264 | orchestrator | 2026-02-04 02:27:36.115271 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:36.115280 | orchestrator | Wednesday 04 February 2026 02:27:29 +0000 (0:00:00.201) 0:00:24.797 **** 2026-02-04 02:27:36.115301 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.115308 | orchestrator | 2026-02-04 02:27:36.115316 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:36.115323 | orchestrator | Wednesday 04 February 2026 02:27:30 +0000 (0:00:00.694) 0:00:25.492 **** 2026-02-04 02:27:36.115331 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.115338 | orchestrator | 2026-02-04 02:27:36.115345 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-04 02:27:36.115353 | orchestrator | Wednesday 04 February 2026 02:27:30 +0000 (0:00:00.221) 0:00:25.713 **** 2026-02-04 02:27:36.115482 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-04 02:27:36.115492 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-04 02:27:36.115499 | orchestrator | 2026-02-04 02:27:36.115507 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-02-04 02:27:36.115514 | orchestrator | Wednesday 04 February 2026 02:27:30 +0000 (0:00:00.197) 0:00:25.911 **** 2026-02-04 02:27:36.115521 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.115529 | orchestrator | 2026-02-04 02:27:36.115536 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-04 02:27:36.115544 | orchestrator | Wednesday 04 February 2026 02:27:30 +0000 (0:00:00.153) 0:00:26.065 **** 2026-02-04 02:27:36.115551 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.115558 | orchestrator | 2026-02-04 02:27:36.115565 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-04 02:27:36.115573 | orchestrator | Wednesday 04 February 2026 02:27:30 +0000 (0:00:00.145) 0:00:26.210 **** 2026-02-04 02:27:36.115580 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.115587 | orchestrator | 2026-02-04 02:27:36.115594 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-04 02:27:36.115601 | orchestrator | Wednesday 04 February 2026 02:27:31 +0000 (0:00:00.164) 0:00:26.375 **** 2026-02-04 02:27:36.115609 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:27:36.115623 | orchestrator | 2026-02-04 02:27:36.115634 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-04 02:27:36.115650 | orchestrator | Wednesday 04 February 2026 02:27:31 +0000 (0:00:00.141) 0:00:26.517 **** 2026-02-04 02:27:36.115689 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}}) 2026-02-04 02:27:36.115704 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a64378d-205e-5817-b815-b641dc764843'}}) 2026-02-04 02:27:36.115716 | orchestrator | 2026-02-04 02:27:36.115728 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-04 02:27:36.115741 | orchestrator | Wednesday 04 February 2026 02:27:31 +0000 (0:00:00.176) 0:00:26.693 **** 2026-02-04 02:27:36.115754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}})  2026-02-04 02:27:36.115767 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a64378d-205e-5817-b815-b641dc764843'}})  2026-02-04 02:27:36.115778 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.115790 | orchestrator | 2026-02-04 02:27:36.115801 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-04 02:27:36.115813 | orchestrator | Wednesday 04 February 2026 02:27:31 +0000 (0:00:00.158) 0:00:26.852 **** 2026-02-04 02:27:36.115823 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}})  2026-02-04 02:27:36.115836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a64378d-205e-5817-b815-b641dc764843'}})  2026-02-04 02:27:36.115846 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.115857 | orchestrator | 2026-02-04 02:27:36.115868 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-04 02:27:36.115879 | orchestrator | Wednesday 04 February 2026 02:27:31 +0000 (0:00:00.167) 0:00:27.020 **** 2026-02-04 02:27:36.115890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}})  2026-02-04 02:27:36.115901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a64378d-205e-5817-b815-b641dc764843'}})  2026-02-04 02:27:36.115912 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.115924 | 
orchestrator | 2026-02-04 02:27:36.115936 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-04 02:27:36.115947 | orchestrator | Wednesday 04 February 2026 02:27:31 +0000 (0:00:00.162) 0:00:27.182 **** 2026-02-04 02:27:36.115959 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:27:36.115970 | orchestrator | 2026-02-04 02:27:36.115981 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-04 02:27:36.115993 | orchestrator | Wednesday 04 February 2026 02:27:32 +0000 (0:00:00.171) 0:00:27.354 **** 2026-02-04 02:27:36.116005 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:27:36.116016 | orchestrator | 2026-02-04 02:27:36.116028 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-04 02:27:36.116041 | orchestrator | Wednesday 04 February 2026 02:27:32 +0000 (0:00:00.144) 0:00:27.498 **** 2026-02-04 02:27:36.116075 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.116088 | orchestrator | 2026-02-04 02:27:36.116100 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-04 02:27:36.116111 | orchestrator | Wednesday 04 February 2026 02:27:32 +0000 (0:00:00.366) 0:00:27.865 **** 2026-02-04 02:27:36.116124 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.116135 | orchestrator | 2026-02-04 02:27:36.116148 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-04 02:27:36.116156 | orchestrator | Wednesday 04 February 2026 02:27:32 +0000 (0:00:00.133) 0:00:27.999 **** 2026-02-04 02:27:36.116172 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.116179 | orchestrator | 2026-02-04 02:27:36.116186 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-04 02:27:36.116194 | orchestrator | Wednesday 04 February 2026 02:27:32 +0000 
(0:00:00.171) 0:00:28.171 **** 2026-02-04 02:27:36.116212 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 02:27:36.116219 | orchestrator |  "ceph_osd_devices": { 2026-02-04 02:27:36.116227 | orchestrator |  "sdb": { 2026-02-04 02:27:36.116235 | orchestrator |  "osd_lvm_uuid": "f48ca6a8-b497-5c65-8a3b-569ec358ef4c" 2026-02-04 02:27:36.116243 | orchestrator |  }, 2026-02-04 02:27:36.116250 | orchestrator |  "sdc": { 2026-02-04 02:27:36.116258 | orchestrator |  "osd_lvm_uuid": "8a64378d-205e-5817-b815-b641dc764843" 2026-02-04 02:27:36.116265 | orchestrator |  } 2026-02-04 02:27:36.116272 | orchestrator |  } 2026-02-04 02:27:36.116280 | orchestrator | } 2026-02-04 02:27:36.116287 | orchestrator | 2026-02-04 02:27:36.116294 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-04 02:27:36.116302 | orchestrator | Wednesday 04 February 2026 02:27:33 +0000 (0:00:00.184) 0:00:28.355 **** 2026-02-04 02:27:36.116309 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.116316 | orchestrator | 2026-02-04 02:27:36.116323 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-04 02:27:36.116330 | orchestrator | Wednesday 04 February 2026 02:27:33 +0000 (0:00:00.155) 0:00:28.511 **** 2026-02-04 02:27:36.116338 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.116345 | orchestrator | 2026-02-04 02:27:36.116352 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-04 02:27:36.116379 | orchestrator | Wednesday 04 February 2026 02:27:33 +0000 (0:00:00.154) 0:00:28.666 **** 2026-02-04 02:27:36.116387 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:27:36.116394 | orchestrator | 2026-02-04 02:27:36.116401 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-04 02:27:36.116409 | orchestrator | Wednesday 04 February 2026 02:27:33 +0000 
(0:00:00.143) 0:00:28.810 **** 2026-02-04 02:27:36.116416 | orchestrator | changed: [testbed-node-4] => { 2026-02-04 02:27:36.116423 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-04 02:27:36.116431 | orchestrator |  "ceph_osd_devices": { 2026-02-04 02:27:36.116438 | orchestrator |  "sdb": { 2026-02-04 02:27:36.116445 | orchestrator |  "osd_lvm_uuid": "f48ca6a8-b497-5c65-8a3b-569ec358ef4c" 2026-02-04 02:27:36.116453 | orchestrator |  }, 2026-02-04 02:27:36.116460 | orchestrator |  "sdc": { 2026-02-04 02:27:36.116467 | orchestrator |  "osd_lvm_uuid": "8a64378d-205e-5817-b815-b641dc764843" 2026-02-04 02:27:36.116474 | orchestrator |  } 2026-02-04 02:27:36.116482 | orchestrator |  }, 2026-02-04 02:27:36.116489 | orchestrator |  "lvm_volumes": [ 2026-02-04 02:27:36.116496 | orchestrator |  { 2026-02-04 02:27:36.116504 | orchestrator |  "data": "osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c", 2026-02-04 02:27:36.116511 | orchestrator |  "data_vg": "ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c" 2026-02-04 02:27:36.116519 | orchestrator |  }, 2026-02-04 02:27:36.116526 | orchestrator |  { 2026-02-04 02:27:36.116533 | orchestrator |  "data": "osd-block-8a64378d-205e-5817-b815-b641dc764843", 2026-02-04 02:27:36.116540 | orchestrator |  "data_vg": "ceph-8a64378d-205e-5817-b815-b641dc764843" 2026-02-04 02:27:36.116548 | orchestrator |  } 2026-02-04 02:27:36.116555 | orchestrator |  ] 2026-02-04 02:27:36.116563 | orchestrator |  } 2026-02-04 02:27:36.116570 | orchestrator | } 2026-02-04 02:27:36.116577 | orchestrator | 2026-02-04 02:27:36.116585 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-04 02:27:36.116592 | orchestrator | Wednesday 04 February 2026 02:27:33 +0000 (0:00:00.232) 0:00:29.042 **** 2026-02-04 02:27:36.116599 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-04 02:27:36.116606 | orchestrator | 2026-02-04 02:27:36.116613 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-04 02:27:36.116621 | orchestrator | 2026-02-04 02:27:36.116628 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-04 02:27:36.116635 | orchestrator | Wednesday 04 February 2026 02:27:35 +0000 (0:00:01.369) 0:00:30.412 **** 2026-02-04 02:27:36.116648 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-04 02:27:36.116655 | orchestrator | 2026-02-04 02:27:36.116662 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-04 02:27:36.116669 | orchestrator | Wednesday 04 February 2026 02:27:35 +0000 (0:00:00.266) 0:00:30.679 **** 2026-02-04 02:27:36.116677 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:27:36.116684 | orchestrator | 2026-02-04 02:27:36.116691 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:36.116698 | orchestrator | Wednesday 04 February 2026 02:27:35 +0000 (0:00:00.277) 0:00:30.957 **** 2026-02-04 02:27:36.116705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-04 02:27:36.116712 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-04 02:27:36.116720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-04 02:27:36.116727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-04 02:27:36.116734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-04 02:27:36.116748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-04 02:27:44.826829 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-04 02:27:44.826911 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-04 02:27:44.826922 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-04 02:27:44.826930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-04 02:27:44.826952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-04 02:27:44.826960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-04 02:27:44.826967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-04 02:27:44.826975 | orchestrator | 2026-02-04 02:27:44.826983 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:44.826992 | orchestrator | Wednesday 04 February 2026 02:27:36 +0000 (0:00:00.402) 0:00:31.359 **** 2026-02-04 02:27:44.826999 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827008 | orchestrator | 2026-02-04 02:27:44.827015 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:44.827022 | orchestrator | Wednesday 04 February 2026 02:27:36 +0000 (0:00:00.251) 0:00:31.611 **** 2026-02-04 02:27:44.827029 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827035 | orchestrator | 2026-02-04 02:27:44.827042 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:44.827049 | orchestrator | Wednesday 04 February 2026 02:27:36 +0000 (0:00:00.227) 0:00:31.839 **** 2026-02-04 02:27:44.827056 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827062 | orchestrator | 2026-02-04 02:27:44.827069 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:44.827077 | 
orchestrator | Wednesday 04 February 2026 02:27:36 +0000 (0:00:00.246) 0:00:32.085 **** 2026-02-04 02:27:44.827084 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827091 | orchestrator | 2026-02-04 02:27:44.827098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:44.827106 | orchestrator | Wednesday 04 February 2026 02:27:37 +0000 (0:00:00.224) 0:00:32.309 **** 2026-02-04 02:27:44.827114 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827122 | orchestrator | 2026-02-04 02:27:44.827130 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:44.827137 | orchestrator | Wednesday 04 February 2026 02:27:37 +0000 (0:00:00.217) 0:00:32.527 **** 2026-02-04 02:27:44.827166 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827174 | orchestrator | 2026-02-04 02:27:44.827183 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:44.827190 | orchestrator | Wednesday 04 February 2026 02:27:37 +0000 (0:00:00.210) 0:00:32.738 **** 2026-02-04 02:27:44.827198 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827204 | orchestrator | 2026-02-04 02:27:44.827212 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:44.827220 | orchestrator | Wednesday 04 February 2026 02:27:38 +0000 (0:00:00.665) 0:00:33.403 **** 2026-02-04 02:27:44.827227 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827234 | orchestrator | 2026-02-04 02:27:44.827241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:44.827248 | orchestrator | Wednesday 04 February 2026 02:27:38 +0000 (0:00:00.208) 0:00:33.612 **** 2026-02-04 02:27:44.827255 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118) 2026-02-04 02:27:44.827263 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118) 2026-02-04 02:27:44.827270 | orchestrator | 2026-02-04 02:27:44.827277 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:44.827283 | orchestrator | Wednesday 04 February 2026 02:27:38 +0000 (0:00:00.434) 0:00:34.047 **** 2026-02-04 02:27:44.827290 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675) 2026-02-04 02:27:44.827296 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675) 2026-02-04 02:27:44.827303 | orchestrator | 2026-02-04 02:27:44.827310 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:44.827318 | orchestrator | Wednesday 04 February 2026 02:27:39 +0000 (0:00:00.465) 0:00:34.512 **** 2026-02-04 02:27:44.827325 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52) 2026-02-04 02:27:44.827333 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52) 2026-02-04 02:27:44.827339 | orchestrator | 2026-02-04 02:27:44.827346 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:27:44.827353 | orchestrator | Wednesday 04 February 2026 02:27:39 +0000 (0:00:00.454) 0:00:34.966 **** 2026-02-04 02:27:44.827361 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b) 2026-02-04 02:27:44.827429 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b) 2026-02-04 02:27:44.827437 | orchestrator | 2026-02-04 02:27:44.827444 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-04 02:27:44.827451 | orchestrator | Wednesday 04 February 2026 02:27:40 +0000 (0:00:00.447) 0:00:35.414 **** 2026-02-04 02:27:44.827459 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-04 02:27:44.827466 | orchestrator | 2026-02-04 02:27:44.827473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.827500 | orchestrator | Wednesday 04 February 2026 02:27:40 +0000 (0:00:00.358) 0:00:35.772 **** 2026-02-04 02:27:44.827508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-04 02:27:44.827515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-04 02:27:44.827522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-04 02:27:44.827538 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-04 02:27:44.827545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-04 02:27:44.827552 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-04 02:27:44.827593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-04 02:27:44.827601 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-04 02:27:44.827609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-04 02:27:44.827617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-04 02:27:44.827624 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-02-04 02:27:44.827632 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-04 02:27:44.827640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-04 02:27:44.827647 | orchestrator | 2026-02-04 02:27:44.827655 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.827663 | orchestrator | Wednesday 04 February 2026 02:27:40 +0000 (0:00:00.405) 0:00:36.177 **** 2026-02-04 02:27:44.827670 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827678 | orchestrator | 2026-02-04 02:27:44.827686 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.827693 | orchestrator | Wednesday 04 February 2026 02:27:41 +0000 (0:00:00.224) 0:00:36.402 **** 2026-02-04 02:27:44.827701 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827709 | orchestrator | 2026-02-04 02:27:44.827718 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.827726 | orchestrator | Wednesday 04 February 2026 02:27:41 +0000 (0:00:00.228) 0:00:36.631 **** 2026-02-04 02:27:44.827734 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827742 | orchestrator | 2026-02-04 02:27:44.827750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.827758 | orchestrator | Wednesday 04 February 2026 02:27:42 +0000 (0:00:00.697) 0:00:37.329 **** 2026-02-04 02:27:44.827766 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827773 | orchestrator | 2026-02-04 02:27:44.827781 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.827789 | orchestrator | Wednesday 04 February 2026 02:27:42 +0000 (0:00:00.242) 0:00:37.571 **** 2026-02-04 02:27:44.827797 
| orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827805 | orchestrator | 2026-02-04 02:27:44.827813 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.827821 | orchestrator | Wednesday 04 February 2026 02:27:42 +0000 (0:00:00.231) 0:00:37.803 **** 2026-02-04 02:27:44.827828 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827837 | orchestrator | 2026-02-04 02:27:44.827844 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.827852 | orchestrator | Wednesday 04 February 2026 02:27:42 +0000 (0:00:00.227) 0:00:38.030 **** 2026-02-04 02:27:44.827860 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827868 | orchestrator | 2026-02-04 02:27:44.827876 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.827884 | orchestrator | Wednesday 04 February 2026 02:27:43 +0000 (0:00:00.235) 0:00:38.265 **** 2026-02-04 02:27:44.827891 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827899 | orchestrator | 2026-02-04 02:27:44.827907 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.827915 | orchestrator | Wednesday 04 February 2026 02:27:43 +0000 (0:00:00.231) 0:00:38.496 **** 2026-02-04 02:27:44.827923 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-04 02:27:44.827930 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-04 02:27:44.827937 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-04 02:27:44.827945 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-04 02:27:44.827953 | orchestrator | 2026-02-04 02:27:44.827970 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.827977 | orchestrator | Wednesday 04 February 2026 02:27:43 +0000 (0:00:00.697) 
0:00:39.193 **** 2026-02-04 02:27:44.827984 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.827991 | orchestrator | 2026-02-04 02:27:44.827998 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.828005 | orchestrator | Wednesday 04 February 2026 02:27:44 +0000 (0:00:00.215) 0:00:39.409 **** 2026-02-04 02:27:44.828012 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.828018 | orchestrator | 2026-02-04 02:27:44.828025 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.828033 | orchestrator | Wednesday 04 February 2026 02:27:44 +0000 (0:00:00.228) 0:00:39.637 **** 2026-02-04 02:27:44.828041 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.828048 | orchestrator | 2026-02-04 02:27:44.828056 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:27:44.828064 | orchestrator | Wednesday 04 February 2026 02:27:44 +0000 (0:00:00.214) 0:00:39.851 **** 2026-02-04 02:27:44.828072 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:44.828080 | orchestrator | 2026-02-04 02:27:44.828098 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-04 02:27:49.459837 | orchestrator | Wednesday 04 February 2026 02:27:44 +0000 (0:00:00.222) 0:00:40.074 **** 2026-02-04 02:27:49.459954 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-04 02:27:49.459968 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-04 02:27:49.459976 | orchestrator | 2026-02-04 02:27:49.459985 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-04 02:27:49.460010 | orchestrator | Wednesday 04 February 2026 02:27:45 +0000 (0:00:00.420) 0:00:40.494 **** 2026-02-04 02:27:49.460018 | orchestrator | skipping: 
[testbed-node-5] 2026-02-04 02:27:49.460026 | orchestrator | 2026-02-04 02:27:49.460034 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-04 02:27:49.460041 | orchestrator | Wednesday 04 February 2026 02:27:45 +0000 (0:00:00.154) 0:00:40.648 **** 2026-02-04 02:27:49.460048 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:49.460056 | orchestrator | 2026-02-04 02:27:49.460063 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-04 02:27:49.460070 | orchestrator | Wednesday 04 February 2026 02:27:45 +0000 (0:00:00.155) 0:00:40.804 **** 2026-02-04 02:27:49.460078 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:49.460085 | orchestrator | 2026-02-04 02:27:49.460092 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-04 02:27:49.460101 | orchestrator | Wednesday 04 February 2026 02:27:45 +0000 (0:00:00.168) 0:00:40.972 **** 2026-02-04 02:27:49.460115 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:27:49.460134 | orchestrator | 2026-02-04 02:27:49.460146 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-04 02:27:49.460159 | orchestrator | Wednesday 04 February 2026 02:27:45 +0000 (0:00:00.188) 0:00:41.161 **** 2026-02-04 02:27:49.460172 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}}) 2026-02-04 02:27:49.460185 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43734a2f-bb9f-5443-b704-3f4971f68639'}}) 2026-02-04 02:27:49.460197 | orchestrator | 2026-02-04 02:27:49.460216 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-04 02:27:49.460229 | orchestrator | Wednesday 04 February 2026 02:27:46 +0000 (0:00:00.175) 0:00:41.336 **** 2026-02-04 02:27:49.460241 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}})  2026-02-04 02:27:49.460255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43734a2f-bb9f-5443-b704-3f4971f68639'}})  2026-02-04 02:27:49.460267 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:49.460303 | orchestrator | 2026-02-04 02:27:49.460313 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-04 02:27:49.460320 | orchestrator | Wednesday 04 February 2026 02:27:46 +0000 (0:00:00.177) 0:00:41.514 **** 2026-02-04 02:27:49.460327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}})  2026-02-04 02:27:49.460334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43734a2f-bb9f-5443-b704-3f4971f68639'}})  2026-02-04 02:27:49.460342 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:49.460349 | orchestrator | 2026-02-04 02:27:49.460356 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-04 02:27:49.460363 | orchestrator | Wednesday 04 February 2026 02:27:46 +0000 (0:00:00.172) 0:00:41.686 **** 2026-02-04 02:27:49.460402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}})  2026-02-04 02:27:49.460415 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43734a2f-bb9f-5443-b704-3f4971f68639'}})  2026-02-04 02:27:49.460424 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:49.460432 | orchestrator | 2026-02-04 02:27:49.460441 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-04 02:27:49.460449 | orchestrator | Wednesday 04 February 2026 02:27:46 +0000 
(0:00:00.180) 0:00:41.867 **** 2026-02-04 02:27:49.460458 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:27:49.460466 | orchestrator | 2026-02-04 02:27:49.460475 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-04 02:27:49.460483 | orchestrator | Wednesday 04 February 2026 02:27:46 +0000 (0:00:00.155) 0:00:42.022 **** 2026-02-04 02:27:49.460491 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:27:49.460499 | orchestrator | 2026-02-04 02:27:49.460508 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-04 02:27:49.460517 | orchestrator | Wednesday 04 February 2026 02:27:46 +0000 (0:00:00.157) 0:00:42.180 **** 2026-02-04 02:27:49.460526 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:49.460535 | orchestrator | 2026-02-04 02:27:49.460543 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-04 02:27:49.460551 | orchestrator | Wednesday 04 February 2026 02:27:47 +0000 (0:00:00.377) 0:00:42.558 **** 2026-02-04 02:27:49.460561 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:49.460573 | orchestrator | 2026-02-04 02:27:49.460586 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-04 02:27:49.460598 | orchestrator | Wednesday 04 February 2026 02:27:47 +0000 (0:00:00.146) 0:00:42.705 **** 2026-02-04 02:27:49.460610 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:49.460623 | orchestrator | 2026-02-04 02:27:49.460635 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-04 02:27:49.460649 | orchestrator | Wednesday 04 February 2026 02:27:47 +0000 (0:00:00.136) 0:00:42.842 **** 2026-02-04 02:27:49.460662 | orchestrator | ok: [testbed-node-5] => { 2026-02-04 02:27:49.460673 | orchestrator |  "ceph_osd_devices": { 2026-02-04 02:27:49.460681 | orchestrator |  "sdb": { 
2026-02-04 02:27:49.460707 | orchestrator |  "osd_lvm_uuid": "7ab9afb0-5bc3-5f2a-af50-46dbad87a4af" 2026-02-04 02:27:49.460717 | orchestrator |  }, 2026-02-04 02:27:49.460726 | orchestrator |  "sdc": { 2026-02-04 02:27:49.460735 | orchestrator |  "osd_lvm_uuid": "43734a2f-bb9f-5443-b704-3f4971f68639" 2026-02-04 02:27:49.460743 | orchestrator |  } 2026-02-04 02:27:49.460750 | orchestrator |  } 2026-02-04 02:27:49.460757 | orchestrator | } 2026-02-04 02:27:49.460765 | orchestrator | 2026-02-04 02:27:49.460778 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-04 02:27:49.460786 | orchestrator | Wednesday 04 February 2026 02:27:47 +0000 (0:00:00.142) 0:00:42.984 **** 2026-02-04 02:27:49.460793 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:49.460809 | orchestrator | 2026-02-04 02:27:49.460816 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-04 02:27:49.460823 | orchestrator | Wednesday 04 February 2026 02:27:47 +0000 (0:00:00.144) 0:00:43.128 **** 2026-02-04 02:27:49.460830 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:49.460838 | orchestrator | 2026-02-04 02:27:49.460845 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-04 02:27:49.460852 | orchestrator | Wednesday 04 February 2026 02:27:48 +0000 (0:00:00.145) 0:00:43.274 **** 2026-02-04 02:27:49.460859 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:27:49.460867 | orchestrator | 2026-02-04 02:27:49.460874 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-04 02:27:49.460881 | orchestrator | Wednesday 04 February 2026 02:27:48 +0000 (0:00:00.134) 0:00:43.408 **** 2026-02-04 02:27:49.460888 | orchestrator | changed: [testbed-node-5] => { 2026-02-04 02:27:49.460895 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-04 02:27:49.460903 | orchestrator | 
 "ceph_osd_devices": { 2026-02-04 02:27:49.460910 | orchestrator |  "sdb": { 2026-02-04 02:27:49.460919 | orchestrator |  "osd_lvm_uuid": "7ab9afb0-5bc3-5f2a-af50-46dbad87a4af" 2026-02-04 02:27:49.460931 | orchestrator |  }, 2026-02-04 02:27:49.460943 | orchestrator |  "sdc": { 2026-02-04 02:27:49.460955 | orchestrator |  "osd_lvm_uuid": "43734a2f-bb9f-5443-b704-3f4971f68639" 2026-02-04 02:27:49.460968 | orchestrator |  } 2026-02-04 02:27:49.460979 | orchestrator |  }, 2026-02-04 02:27:49.460992 | orchestrator |  "lvm_volumes": [ 2026-02-04 02:27:49.461002 | orchestrator |  { 2026-02-04 02:27:49.461009 | orchestrator |  "data": "osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af", 2026-02-04 02:27:49.461017 | orchestrator |  "data_vg": "ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af" 2026-02-04 02:27:49.461024 | orchestrator |  }, 2026-02-04 02:27:49.461031 | orchestrator |  { 2026-02-04 02:27:49.461038 | orchestrator |  "data": "osd-block-43734a2f-bb9f-5443-b704-3f4971f68639", 2026-02-04 02:27:49.461045 | orchestrator |  "data_vg": "ceph-43734a2f-bb9f-5443-b704-3f4971f68639" 2026-02-04 02:27:49.461053 | orchestrator |  } 2026-02-04 02:27:49.461060 | orchestrator |  ] 2026-02-04 02:27:49.461068 | orchestrator |  } 2026-02-04 02:27:49.461075 | orchestrator | } 2026-02-04 02:27:49.461082 | orchestrator | 2026-02-04 02:27:49.461090 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-04 02:27:49.461097 | orchestrator | Wednesday 04 February 2026 02:27:48 +0000 (0:00:00.216) 0:00:43.625 **** 2026-02-04 02:27:49.461104 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-04 02:27:49.461111 | orchestrator | 2026-02-04 02:27:49.461118 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:27:49.461126 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-04 02:27:49.461135 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-04 02:27:49.461142 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-04 02:27:49.461149 | orchestrator | 2026-02-04 02:27:49.461156 | orchestrator | 2026-02-04 02:27:49.461164 | orchestrator | 2026-02-04 02:27:49.461171 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:27:49.461178 | orchestrator | Wednesday 04 February 2026 02:27:49 +0000 (0:00:01.061) 0:00:44.687 **** 2026-02-04 02:27:49.461185 | orchestrator | =============================================================================== 2026-02-04 02:27:49.461192 | orchestrator | Write configuration file ------------------------------------------------ 4.32s 2026-02-04 02:27:49.461206 | orchestrator | Add known links to the list of available block devices ------------------ 1.33s 2026-02-04 02:27:49.461214 | orchestrator | Add known partitions to the list of available block devices ------------- 1.23s 2026-02-04 02:27:49.461221 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s 2026-02-04 02:27:49.461229 | orchestrator | Add known links to the list of available block devices ------------------ 0.94s 2026-02-04 02:27:49.461236 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s 2026-02-04 02:27:49.461243 | orchestrator | Print configuration data ------------------------------------------------ 0.90s 2026-02-04 02:27:49.461250 | orchestrator | Set DB devices config data ---------------------------------------------- 0.89s 2026-02-04 02:27:49.461257 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.81s 2026-02-04 02:27:49.461266 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.80s 2026-02-04 
02:27:49.461278 | orchestrator | Get initial list of available block devices ----------------------------- 0.78s 2026-02-04 02:27:49.461290 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.73s 2026-02-04 02:27:49.461302 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-02-04 02:27:49.461322 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-02-04 02:27:49.886591 | orchestrator | Add known partitions to the list of available block devices ------------- 0.69s 2026-02-04 02:27:49.886682 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-02-04 02:27:49.886693 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2026-02-04 02:27:49.886716 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-02-04 02:27:49.886725 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-02-04 02:27:49.886732 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-02-04 02:28:12.426343 | orchestrator | 2026-02-04 02:28:12 | INFO  | Task 46629a69-b872-4293-8fb7-13794b69ad14 (sync inventory) is running in background. Output coming soon. 
2026-02-04 02:28:39.746441 | orchestrator | 2026-02-04 02:28:13 | INFO  | Starting group_vars file reorganization 2026-02-04 02:28:39.746522 | orchestrator | 2026-02-04 02:28:13 | INFO  | Moved 0 file(s) to their respective directories 2026-02-04 02:28:39.746530 | orchestrator | 2026-02-04 02:28:13 | INFO  | Group_vars file reorganization completed 2026-02-04 02:28:39.746535 | orchestrator | 2026-02-04 02:28:16 | INFO  | Starting variable preparation from inventory 2026-02-04 02:28:39.746539 | orchestrator | 2026-02-04 02:28:19 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-04 02:28:39.746544 | orchestrator | 2026-02-04 02:28:19 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-04 02:28:39.746548 | orchestrator | 2026-02-04 02:28:19 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-04 02:28:39.746552 | orchestrator | 2026-02-04 02:28:19 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-04 02:28:39.746557 | orchestrator | 2026-02-04 02:28:19 | INFO  | Variable preparation completed 2026-02-04 02:28:39.746560 | orchestrator | 2026-02-04 02:28:20 | INFO  | Starting inventory overwrite handling 2026-02-04 02:28:39.746564 | orchestrator | 2026-02-04 02:28:20 | INFO  | Handling group overwrites in 99-overwrite 2026-02-04 02:28:39.746568 | orchestrator | 2026-02-04 02:28:20 | INFO  | Removing group frr:children from 60-generic 2026-02-04 02:28:39.746573 | orchestrator | 2026-02-04 02:28:20 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-04 02:28:39.746576 | orchestrator | 2026-02-04 02:28:20 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-04 02:28:39.746599 | orchestrator | 2026-02-04 02:28:20 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-04 02:28:39.746603 | orchestrator | 2026-02-04 02:28:20 | INFO  | Handling group overwrites in 20-roles 2026-02-04 02:28:39.746607 | orchestrator | 2026-02-04 02:28:20 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-04 02:28:39.746611 | orchestrator | 2026-02-04 02:28:20 | INFO  | Removed 5 group(s) in total 2026-02-04 02:28:39.746614 | orchestrator | 2026-02-04 02:28:20 | INFO  | Inventory overwrite handling completed 2026-02-04 02:28:39.746618 | orchestrator | 2026-02-04 02:28:21 | INFO  | Starting merge of inventory files 2026-02-04 02:28:39.746622 | orchestrator | 2026-02-04 02:28:21 | INFO  | Inventory files merged successfully 2026-02-04 02:28:39.746628 | orchestrator | 2026-02-04 02:28:27 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-04 02:28:39.746634 | orchestrator | 2026-02-04 02:28:38 | INFO  | Successfully wrote ClusterShell configuration 2026-02-04 02:28:39.746640 | orchestrator | [master fa125b7] 2026-02-04-02-28 2026-02-04 02:28:39.746647 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-02-04 02:28:42.109102 | orchestrator | 2026-02-04 02:28:42 | INFO  | Task d62bf58e-6ad4-4e68-8a86-346601c7c92e (ceph-create-lvm-devices) was prepared for execution. 2026-02-04 02:28:42.109191 | orchestrator | 2026-02-04 02:28:42 | INFO  | It takes a moment until task d62bf58e-6ad4-4e68-8a86-346601c7c92e (ceph-create-lvm-devices) has been started and output is visible here. 
2026-02-04 02:28:54.368864 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-04 02:28:54.368945 | orchestrator | 2.16.14 2026-02-04 02:28:54.368953 | orchestrator | 2026-02-04 02:28:54.368957 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-04 02:28:54.368963 | orchestrator | 2026-02-04 02:28:54.368967 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-04 02:28:54.368971 | orchestrator | Wednesday 04 February 2026 02:28:46 +0000 (0:00:00.313) 0:00:00.313 **** 2026-02-04 02:28:54.368976 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-04 02:28:54.368981 | orchestrator | 2026-02-04 02:28:54.368985 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-04 02:28:54.368989 | orchestrator | Wednesday 04 February 2026 02:28:46 +0000 (0:00:00.270) 0:00:00.583 **** 2026-02-04 02:28:54.368993 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:28:54.368997 | orchestrator | 2026-02-04 02:28:54.369001 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:28:54.369005 | orchestrator | Wednesday 04 February 2026 02:28:47 +0000 (0:00:00.280) 0:00:00.863 **** 2026-02-04 02:28:54.369009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-04 02:28:54.369013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-04 02:28:54.369027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-04 02:28:54.369031 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-04 02:28:54.369035 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-04 
02:28:54.369039 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-04 02:28:54.369043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-04 02:28:54.369047 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-04 02:28:54.369051 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-04 02:28:54.369054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-04 02:28:54.369072 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-04 02:28:54.369076 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-04 02:28:54.369080 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-04 02:28:54.369084 | orchestrator | 2026-02-04 02:28:54.369088 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:28:54.369092 | orchestrator | Wednesday 04 February 2026 02:28:47 +0000 (0:00:00.564) 0:00:01.428 **** 2026-02-04 02:28:54.369095 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369099 | orchestrator | 2026-02-04 02:28:54.369103 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:28:54.369107 | orchestrator | Wednesday 04 February 2026 02:28:47 +0000 (0:00:00.234) 0:00:01.663 **** 2026-02-04 02:28:54.369110 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369114 | orchestrator | 2026-02-04 02:28:54.369118 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:28:54.369122 | orchestrator | Wednesday 04 February 2026 02:28:48 +0000 (0:00:00.201) 0:00:01.864 **** 2026-02-04 
02:28:54.369125 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369129 | orchestrator | 2026-02-04 02:28:54.369133 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:28:54.369137 | orchestrator | Wednesday 04 February 2026 02:28:48 +0000 (0:00:00.203) 0:00:02.068 **** 2026-02-04 02:28:54.369140 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369144 | orchestrator | 2026-02-04 02:28:54.369148 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:28:54.369152 | orchestrator | Wednesday 04 February 2026 02:28:48 +0000 (0:00:00.217) 0:00:02.286 **** 2026-02-04 02:28:54.369155 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369159 | orchestrator | 2026-02-04 02:28:54.369163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:28:54.369167 | orchestrator | Wednesday 04 February 2026 02:28:48 +0000 (0:00:00.215) 0:00:02.501 **** 2026-02-04 02:28:54.369171 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369174 | orchestrator | 2026-02-04 02:28:54.369178 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:28:54.369182 | orchestrator | Wednesday 04 February 2026 02:28:48 +0000 (0:00:00.200) 0:00:02.702 **** 2026-02-04 02:28:54.369186 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369189 | orchestrator | 2026-02-04 02:28:54.369193 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:28:54.369197 | orchestrator | Wednesday 04 February 2026 02:28:49 +0000 (0:00:00.244) 0:00:02.946 **** 2026-02-04 02:28:54.369201 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369204 | orchestrator | 2026-02-04 02:28:54.369208 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-02-04 02:28:54.369212 | orchestrator | Wednesday 04 February 2026 02:28:49 +0000 (0:00:00.221) 0:00:03.168 **** 2026-02-04 02:28:54.369216 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861) 2026-02-04 02:28:54.369221 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861) 2026-02-04 02:28:54.369225 | orchestrator | 2026-02-04 02:28:54.369229 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:28:54.369248 | orchestrator | Wednesday 04 February 2026 02:28:49 +0000 (0:00:00.426) 0:00:03.594 **** 2026-02-04 02:28:54.369258 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388) 2026-02-04 02:28:54.369265 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388) 2026-02-04 02:28:54.369272 | orchestrator | 2026-02-04 02:28:54.369278 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:28:54.369291 | orchestrator | Wednesday 04 February 2026 02:28:50 +0000 (0:00:00.663) 0:00:04.257 **** 2026-02-04 02:28:54.369297 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40) 2026-02-04 02:28:54.369304 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40) 2026-02-04 02:28:54.369309 | orchestrator | 2026-02-04 02:28:54.369315 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:28:54.369320 | orchestrator | Wednesday 04 February 2026 02:28:51 +0000 (0:00:00.679) 0:00:04.937 **** 2026-02-04 02:28:54.369326 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811) 2026-02-04 02:28:54.369337 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811) 2026-02-04 02:28:54.369343 | orchestrator | 2026-02-04 02:28:54.369350 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:28:54.369356 | orchestrator | Wednesday 04 February 2026 02:28:52 +0000 (0:00:00.884) 0:00:05.821 **** 2026-02-04 02:28:54.369363 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-04 02:28:54.369369 | orchestrator | 2026-02-04 02:28:54.369375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:28:54.369381 | orchestrator | Wednesday 04 February 2026 02:28:52 +0000 (0:00:00.371) 0:00:06.193 **** 2026-02-04 02:28:54.369388 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-04 02:28:54.369394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-04 02:28:54.369474 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-04 02:28:54.369482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-04 02:28:54.369487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-04 02:28:54.369494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-04 02:28:54.369499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-04 02:28:54.369505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-04 02:28:54.369511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-04 02:28:54.369517 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-04 02:28:54.369523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-04 02:28:54.369529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-04 02:28:54.369536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-04 02:28:54.369542 | orchestrator | 2026-02-04 02:28:54.369549 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:28:54.369555 | orchestrator | Wednesday 04 February 2026 02:28:52 +0000 (0:00:00.430) 0:00:06.623 **** 2026-02-04 02:28:54.369561 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369567 | orchestrator | 2026-02-04 02:28:54.369574 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:28:54.369580 | orchestrator | Wednesday 04 February 2026 02:28:53 +0000 (0:00:00.236) 0:00:06.860 **** 2026-02-04 02:28:54.369586 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369592 | orchestrator | 2026-02-04 02:28:54.369599 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:28:54.369605 | orchestrator | Wednesday 04 February 2026 02:28:53 +0000 (0:00:00.235) 0:00:07.096 **** 2026-02-04 02:28:54.369612 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369625 | orchestrator | 2026-02-04 02:28:54.369632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:28:54.369639 | orchestrator | Wednesday 04 February 2026 02:28:53 +0000 (0:00:00.211) 0:00:07.307 **** 2026-02-04 02:28:54.369645 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369651 | orchestrator | 2026-02-04 02:28:54.369658 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-04 02:28:54.369665 | orchestrator | Wednesday 04 February 2026 02:28:53 +0000 (0:00:00.223) 0:00:07.530 **** 2026-02-04 02:28:54.369672 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369678 | orchestrator | 2026-02-04 02:28:54.369685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:28:54.369691 | orchestrator | Wednesday 04 February 2026 02:28:53 +0000 (0:00:00.225) 0:00:07.755 **** 2026-02-04 02:28:54.369698 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369704 | orchestrator | 2026-02-04 02:28:54.369710 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:28:54.369716 | orchestrator | Wednesday 04 February 2026 02:28:54 +0000 (0:00:00.200) 0:00:07.956 **** 2026-02-04 02:28:54.369725 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:28:54.369731 | orchestrator | 2026-02-04 02:28:54.369745 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:02.762679 | orchestrator | Wednesday 04 February 2026 02:28:54 +0000 (0:00:00.205) 0:00:08.162 **** 2026-02-04 02:29:02.762800 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.762818 | orchestrator | 2026-02-04 02:29:02.762831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:02.762844 | orchestrator | Wednesday 04 February 2026 02:28:55 +0000 (0:00:00.644) 0:00:08.806 **** 2026-02-04 02:29:02.762855 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-04 02:29:02.762868 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-04 02:29:02.762879 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-04 02:29:02.762890 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-04 02:29:02.762901 | orchestrator | 2026-02-04 
02:29:02.762912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:02.762924 | orchestrator | Wednesday 04 February 2026 02:28:55 +0000 (0:00:00.679) 0:00:09.485 **** 2026-02-04 02:29:02.762934 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.762945 | orchestrator | 2026-02-04 02:29:02.762956 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:02.762967 | orchestrator | Wednesday 04 February 2026 02:28:55 +0000 (0:00:00.239) 0:00:09.724 **** 2026-02-04 02:29:02.762978 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.762989 | orchestrator | 2026-02-04 02:29:02.763017 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:02.763028 | orchestrator | Wednesday 04 February 2026 02:28:56 +0000 (0:00:00.211) 0:00:09.935 **** 2026-02-04 02:29:02.763066 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.763078 | orchestrator | 2026-02-04 02:29:02.763089 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:02.763100 | orchestrator | Wednesday 04 February 2026 02:28:56 +0000 (0:00:00.216) 0:00:10.152 **** 2026-02-04 02:29:02.763111 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.763122 | orchestrator | 2026-02-04 02:29:02.763133 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-04 02:29:02.763144 | orchestrator | Wednesday 04 February 2026 02:28:56 +0000 (0:00:00.238) 0:00:10.390 **** 2026-02-04 02:29:02.763155 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.763166 | orchestrator | 2026-02-04 02:29:02.763176 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-04 02:29:02.763187 | orchestrator | Wednesday 04 February 2026 02:28:56 +0000 (0:00:00.201) 
0:00:10.592 **** 2026-02-04 02:29:02.763199 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '33635451-34dd-546b-bd98-6f515d7d790f'}}) 2026-02-04 02:29:02.763248 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f6bda8a0-a04e-51a6-8ac1-652b1721251e'}}) 2026-02-04 02:29:02.763261 | orchestrator | 2026-02-04 02:29:02.763274 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-04 02:29:02.763308 | orchestrator | Wednesday 04 February 2026 02:28:56 +0000 (0:00:00.209) 0:00:10.801 **** 2026-02-04 02:29:02.763322 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'}) 2026-02-04 02:29:02.763336 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'}) 2026-02-04 02:29:02.763349 | orchestrator | 2026-02-04 02:29:02.763361 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-04 02:29:02.763373 | orchestrator | Wednesday 04 February 2026 02:28:59 +0000 (0:00:02.011) 0:00:12.813 **** 2026-02-04 02:29:02.763385 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:02.763426 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:02.763442 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.763455 | orchestrator | 2026-02-04 02:29:02.763468 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-04 02:29:02.763479 | orchestrator | Wednesday 04 February 2026 
02:28:59 +0000 (0:00:00.161) 0:00:12.974 **** 2026-02-04 02:29:02.763490 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'}) 2026-02-04 02:29:02.763502 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'}) 2026-02-04 02:29:02.763512 | orchestrator | 2026-02-04 02:29:02.763523 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-04 02:29:02.763534 | orchestrator | Wednesday 04 February 2026 02:29:00 +0000 (0:00:01.499) 0:00:14.474 **** 2026-02-04 02:29:02.763545 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:02.763556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:02.763567 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.763578 | orchestrator | 2026-02-04 02:29:02.763589 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-04 02:29:02.763600 | orchestrator | Wednesday 04 February 2026 02:29:00 +0000 (0:00:00.151) 0:00:14.625 **** 2026-02-04 02:29:02.763628 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.763640 | orchestrator | 2026-02-04 02:29:02.763651 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-04 02:29:02.763662 | orchestrator | Wednesday 04 February 2026 02:29:01 +0000 (0:00:00.375) 0:00:15.001 **** 2026-02-04 02:29:02.763673 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 
'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:02.763684 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:02.763695 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.763706 | orchestrator | 2026-02-04 02:29:02.763717 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-04 02:29:02.763728 | orchestrator | Wednesday 04 February 2026 02:29:01 +0000 (0:00:00.162) 0:00:15.164 **** 2026-02-04 02:29:02.763748 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.763759 | orchestrator | 2026-02-04 02:29:02.763770 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-04 02:29:02.763781 | orchestrator | Wednesday 04 February 2026 02:29:01 +0000 (0:00:00.145) 0:00:15.309 **** 2026-02-04 02:29:02.763813 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:02.763825 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:02.763836 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.763847 | orchestrator | 2026-02-04 02:29:02.763858 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-04 02:29:02.763869 | orchestrator | Wednesday 04 February 2026 02:29:01 +0000 (0:00:00.177) 0:00:15.487 **** 2026-02-04 02:29:02.763879 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.763890 | orchestrator | 2026-02-04 02:29:02.763901 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-04 02:29:02.763912 | orchestrator | 
Wednesday 04 February 2026 02:29:01 +0000 (0:00:00.150) 0:00:15.637 **** 2026-02-04 02:29:02.763923 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:02.763934 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:02.763945 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.763956 | orchestrator | 2026-02-04 02:29:02.763966 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-04 02:29:02.763978 | orchestrator | Wednesday 04 February 2026 02:29:01 +0000 (0:00:00.164) 0:00:15.802 **** 2026-02-04 02:29:02.763989 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:29:02.764000 | orchestrator | 2026-02-04 02:29:02.764011 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-04 02:29:02.764022 | orchestrator | Wednesday 04 February 2026 02:29:02 +0000 (0:00:00.143) 0:00:15.945 **** 2026-02-04 02:29:02.764033 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:02.764044 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:02.764055 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.764065 | orchestrator | 2026-02-04 02:29:02.764076 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-04 02:29:02.764087 | orchestrator | Wednesday 04 February 2026 02:29:02 +0000 (0:00:00.153) 0:00:16.099 **** 2026-02-04 02:29:02.764098 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:02.764109 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:02.764120 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.764131 | orchestrator | 2026-02-04 02:29:02.764142 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-04 02:29:02.764152 | orchestrator | Wednesday 04 February 2026 02:29:02 +0000 (0:00:00.160) 0:00:16.260 **** 2026-02-04 02:29:02.764163 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:02.764174 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:02.764192 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.764203 | orchestrator | 2026-02-04 02:29:02.764214 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-04 02:29:02.764225 | orchestrator | Wednesday 04 February 2026 02:29:02 +0000 (0:00:00.155) 0:00:16.416 **** 2026-02-04 02:29:02.764236 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:02.764247 | orchestrator | 2026-02-04 02:29:02.764258 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-04 02:29:02.764275 | orchestrator | Wednesday 04 February 2026 02:29:02 +0000 (0:00:00.141) 0:00:16.557 **** 2026-02-04 02:29:09.524555 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.524686 | orchestrator | 2026-02-04 02:29:09.524706 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-02-04 02:29:09.524726 | orchestrator | Wednesday 04 February 2026 02:29:02 +0000 (0:00:00.142) 0:00:16.700 **** 2026-02-04 02:29:09.524738 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.524751 | orchestrator | 2026-02-04 02:29:09.524763 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-04 02:29:09.524775 | orchestrator | Wednesday 04 February 2026 02:29:03 +0000 (0:00:00.367) 0:00:17.068 **** 2026-02-04 02:29:09.524787 | orchestrator | ok: [testbed-node-3] => { 2026-02-04 02:29:09.524800 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-04 02:29:09.524813 | orchestrator | } 2026-02-04 02:29:09.524826 | orchestrator | 2026-02-04 02:29:09.524838 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-04 02:29:09.524850 | orchestrator | Wednesday 04 February 2026 02:29:03 +0000 (0:00:00.156) 0:00:17.224 **** 2026-02-04 02:29:09.524864 | orchestrator | ok: [testbed-node-3] => { 2026-02-04 02:29:09.524876 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-04 02:29:09.524888 | orchestrator | } 2026-02-04 02:29:09.524899 | orchestrator | 2026-02-04 02:29:09.524931 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-04 02:29:09.524956 | orchestrator | Wednesday 04 February 2026 02:29:03 +0000 (0:00:00.152) 0:00:17.377 **** 2026-02-04 02:29:09.524970 | orchestrator | ok: [testbed-node-3] => { 2026-02-04 02:29:09.524982 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-04 02:29:09.524994 | orchestrator | } 2026-02-04 02:29:09.525007 | orchestrator | 2026-02-04 02:29:09.525019 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-04 02:29:09.525032 | orchestrator | Wednesday 04 February 2026 02:29:03 +0000 (0:00:00.153) 0:00:17.531 **** 2026-02-04 02:29:09.525045 | orchestrator | ok: 
[testbed-node-3] 2026-02-04 02:29:09.525058 | orchestrator | 2026-02-04 02:29:09.525070 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-04 02:29:09.525082 | orchestrator | Wednesday 04 February 2026 02:29:04 +0000 (0:00:00.690) 0:00:18.222 **** 2026-02-04 02:29:09.525091 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:29:09.525100 | orchestrator | 2026-02-04 02:29:09.525109 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-04 02:29:09.525119 | orchestrator | Wednesday 04 February 2026 02:29:04 +0000 (0:00:00.511) 0:00:18.734 **** 2026-02-04 02:29:09.525127 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:29:09.525136 | orchestrator | 2026-02-04 02:29:09.525144 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-04 02:29:09.525157 | orchestrator | Wednesday 04 February 2026 02:29:05 +0000 (0:00:00.551) 0:00:19.285 **** 2026-02-04 02:29:09.525169 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:29:09.525182 | orchestrator | 2026-02-04 02:29:09.525194 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-04 02:29:09.525207 | orchestrator | Wednesday 04 February 2026 02:29:05 +0000 (0:00:00.167) 0:00:19.453 **** 2026-02-04 02:29:09.525219 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525231 | orchestrator | 2026-02-04 02:29:09.525243 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-04 02:29:09.525280 | orchestrator | Wednesday 04 February 2026 02:29:05 +0000 (0:00:00.126) 0:00:19.579 **** 2026-02-04 02:29:09.525294 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525307 | orchestrator | 2026-02-04 02:29:09.525320 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-04 02:29:09.525333 | orchestrator | 
Wednesday 04 February 2026 02:29:05 +0000 (0:00:00.109) 0:00:19.688 **** 2026-02-04 02:29:09.525344 | orchestrator | ok: [testbed-node-3] => { 2026-02-04 02:29:09.525358 | orchestrator |  "vgs_report": { 2026-02-04 02:29:09.525372 | orchestrator |  "vg": [] 2026-02-04 02:29:09.525385 | orchestrator |  } 2026-02-04 02:29:09.525397 | orchestrator | } 2026-02-04 02:29:09.525441 | orchestrator | 2026-02-04 02:29:09.525454 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-04 02:29:09.525464 | orchestrator | Wednesday 04 February 2026 02:29:06 +0000 (0:00:00.147) 0:00:19.836 **** 2026-02-04 02:29:09.525475 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525487 | orchestrator | 2026-02-04 02:29:09.525499 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-04 02:29:09.525510 | orchestrator | Wednesday 04 February 2026 02:29:06 +0000 (0:00:00.134) 0:00:19.970 **** 2026-02-04 02:29:09.525523 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525535 | orchestrator | 2026-02-04 02:29:09.525547 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-04 02:29:09.525559 | orchestrator | Wednesday 04 February 2026 02:29:06 +0000 (0:00:00.369) 0:00:20.339 **** 2026-02-04 02:29:09.525571 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525583 | orchestrator | 2026-02-04 02:29:09.525596 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-04 02:29:09.525608 | orchestrator | Wednesday 04 February 2026 02:29:06 +0000 (0:00:00.148) 0:00:20.488 **** 2026-02-04 02:29:09.525620 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525633 | orchestrator | 2026-02-04 02:29:09.525640 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-04 02:29:09.525647 | orchestrator | 
Wednesday 04 February 2026 02:29:06 +0000 (0:00:00.148) 0:00:20.636 **** 2026-02-04 02:29:09.525655 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525662 | orchestrator | 2026-02-04 02:29:09.525669 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-04 02:29:09.525676 | orchestrator | Wednesday 04 February 2026 02:29:06 +0000 (0:00:00.142) 0:00:20.779 **** 2026-02-04 02:29:09.525683 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525690 | orchestrator | 2026-02-04 02:29:09.525697 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-04 02:29:09.525705 | orchestrator | Wednesday 04 February 2026 02:29:07 +0000 (0:00:00.153) 0:00:20.933 **** 2026-02-04 02:29:09.525712 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525719 | orchestrator | 2026-02-04 02:29:09.525726 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-04 02:29:09.525733 | orchestrator | Wednesday 04 February 2026 02:29:07 +0000 (0:00:00.147) 0:00:21.080 **** 2026-02-04 02:29:09.525758 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525766 | orchestrator | 2026-02-04 02:29:09.525773 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-04 02:29:09.525780 | orchestrator | Wednesday 04 February 2026 02:29:07 +0000 (0:00:00.155) 0:00:21.235 **** 2026-02-04 02:29:09.525787 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525795 | orchestrator | 2026-02-04 02:29:09.525802 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-04 02:29:09.525809 | orchestrator | Wednesday 04 February 2026 02:29:07 +0000 (0:00:00.145) 0:00:21.381 **** 2026-02-04 02:29:09.525816 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525823 | orchestrator | 2026-02-04 02:29:09.525831 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-04 02:29:09.525838 | orchestrator | Wednesday 04 February 2026 02:29:07 +0000 (0:00:00.137) 0:00:21.519 **** 2026-02-04 02:29:09.525854 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525861 | orchestrator | 2026-02-04 02:29:09.525868 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-04 02:29:09.525875 | orchestrator | Wednesday 04 February 2026 02:29:07 +0000 (0:00:00.143) 0:00:21.662 **** 2026-02-04 02:29:09.525882 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525890 | orchestrator | 2026-02-04 02:29:09.525903 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-04 02:29:09.525910 | orchestrator | Wednesday 04 February 2026 02:29:08 +0000 (0:00:00.166) 0:00:21.829 **** 2026-02-04 02:29:09.525918 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525925 | orchestrator | 2026-02-04 02:29:09.525932 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-04 02:29:09.525939 | orchestrator | Wednesday 04 February 2026 02:29:08 +0000 (0:00:00.155) 0:00:21.985 **** 2026-02-04 02:29:09.525946 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525953 | orchestrator | 2026-02-04 02:29:09.525960 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-04 02:29:09.525968 | orchestrator | Wednesday 04 February 2026 02:29:08 +0000 (0:00:00.363) 0:00:22.348 **** 2026-02-04 02:29:09.525976 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:09.525985 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 
'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:09.525992 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.525999 | orchestrator | 2026-02-04 02:29:09.526006 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-04 02:29:09.526013 | orchestrator | Wednesday 04 February 2026 02:29:08 +0000 (0:00:00.155) 0:00:22.504 **** 2026-02-04 02:29:09.526077 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:09.526090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:09.526103 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.526116 | orchestrator | 2026-02-04 02:29:09.526128 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-04 02:29:09.526140 | orchestrator | Wednesday 04 February 2026 02:29:08 +0000 (0:00:00.168) 0:00:22.673 **** 2026-02-04 02:29:09.526153 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:09.526165 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:09.526178 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.526191 | orchestrator | 2026-02-04 02:29:09.526205 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-04 02:29:09.526217 | orchestrator | Wednesday 04 February 2026 02:29:09 +0000 (0:00:00.160) 0:00:22.834 **** 2026-02-04 02:29:09.526229 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:09.526242 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:09.526255 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.526269 | orchestrator | 2026-02-04 02:29:09.526283 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-04 02:29:09.526296 | orchestrator | Wednesday 04 February 2026 02:29:09 +0000 (0:00:00.165) 0:00:22.999 **** 2026-02-04 02:29:09.526319 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:09.526332 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:09.526344 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:09.526356 | orchestrator | 2026-02-04 02:29:09.526369 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-04 02:29:09.526382 | orchestrator | Wednesday 04 February 2026 02:29:09 +0000 (0:00:00.161) 0:00:23.161 **** 2026-02-04 02:29:09.526467 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:14.931564 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:14.931647 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:14.931657 | orchestrator | 2026-02-04 02:29:14.931665 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-04 02:29:14.931673 | orchestrator | Wednesday 04 February 2026 02:29:09 +0000 (0:00:00.162) 0:00:23.323 **** 2026-02-04 02:29:14.931680 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:14.931687 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:14.931693 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:14.931700 | orchestrator | 2026-02-04 02:29:14.931718 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-04 02:29:14.931725 | orchestrator | Wednesday 04 February 2026 02:29:09 +0000 (0:00:00.164) 0:00:23.487 **** 2026-02-04 02:29:14.931732 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:14.931738 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:14.931744 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:14.931751 | orchestrator | 2026-02-04 02:29:14.931757 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-04 02:29:14.931763 | orchestrator | Wednesday 04 February 2026 02:29:09 +0000 (0:00:00.170) 0:00:23.658 **** 2026-02-04 02:29:14.931770 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:29:14.931777 | orchestrator | 2026-02-04 02:29:14.931784 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-04 02:29:14.931790 | orchestrator | Wednesday 04 February 2026 02:29:10 +0000 
(0:00:00.521) 0:00:24.180 **** 2026-02-04 02:29:14.931796 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:29:14.931803 | orchestrator | 2026-02-04 02:29:14.931809 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-04 02:29:14.931815 | orchestrator | Wednesday 04 February 2026 02:29:10 +0000 (0:00:00.512) 0:00:24.692 **** 2026-02-04 02:29:14.931822 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:29:14.931828 | orchestrator | 2026-02-04 02:29:14.931834 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-04 02:29:14.931841 | orchestrator | Wednesday 04 February 2026 02:29:11 +0000 (0:00:00.148) 0:00:24.841 **** 2026-02-04 02:29:14.931847 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'vg_name': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'}) 2026-02-04 02:29:14.931855 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'vg_name': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'}) 2026-02-04 02:29:14.931878 | orchestrator | 2026-02-04 02:29:14.931885 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-04 02:29:14.931891 | orchestrator | Wednesday 04 February 2026 02:29:11 +0000 (0:00:00.189) 0:00:25.030 **** 2026-02-04 02:29:14.931897 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:14.931914 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:14.931921 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:14.931927 | orchestrator | 2026-02-04 02:29:14.931933 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-04 02:29:14.931940 | orchestrator | Wednesday 04 February 2026 02:29:11 +0000 (0:00:00.379) 0:00:25.409 **** 2026-02-04 02:29:14.931946 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:14.931952 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:14.931959 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:14.931965 | orchestrator | 2026-02-04 02:29:14.931971 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-04 02:29:14.931977 | orchestrator | Wednesday 04 February 2026 02:29:11 +0000 (0:00:00.163) 0:00:25.573 **** 2026-02-04 02:29:14.931984 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 02:29:14.931990 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 02:29:14.931996 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:29:14.932003 | orchestrator | 2026-02-04 02:29:14.932009 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-04 02:29:14.932015 | orchestrator | Wednesday 04 February 2026 02:29:11 +0000 (0:00:00.156) 0:00:25.729 **** 2026-02-04 02:29:14.932034 | orchestrator | ok: [testbed-node-3] => { 2026-02-04 02:29:14.932041 | orchestrator |  "lvm_report": { 2026-02-04 02:29:14.932048 | orchestrator |  "lv": [ 2026-02-04 02:29:14.932054 | orchestrator |  { 2026-02-04 02:29:14.932061 | orchestrator |  "lv_name": 
"osd-block-33635451-34dd-546b-bd98-6f515d7d790f", 2026-02-04 02:29:14.932068 | orchestrator |  "vg_name": "ceph-33635451-34dd-546b-bd98-6f515d7d790f" 2026-02-04 02:29:14.932074 | orchestrator |  }, 2026-02-04 02:29:14.932081 | orchestrator |  { 2026-02-04 02:29:14.932087 | orchestrator |  "lv_name": "osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e", 2026-02-04 02:29:14.932093 | orchestrator |  "vg_name": "ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e" 2026-02-04 02:29:14.932100 | orchestrator |  } 2026-02-04 02:29:14.932106 | orchestrator |  ], 2026-02-04 02:29:14.932112 | orchestrator |  "pv": [ 2026-02-04 02:29:14.932120 | orchestrator |  { 2026-02-04 02:29:14.932127 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-04 02:29:14.932134 | orchestrator |  "vg_name": "ceph-33635451-34dd-546b-bd98-6f515d7d790f" 2026-02-04 02:29:14.932142 | orchestrator |  }, 2026-02-04 02:29:14.932149 | orchestrator |  { 2026-02-04 02:29:14.932164 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-04 02:29:14.932178 | orchestrator |  "vg_name": "ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e" 2026-02-04 02:29:14.932193 | orchestrator |  } 2026-02-04 02:29:14.932213 | orchestrator |  ] 2026-02-04 02:29:14.932224 | orchestrator |  } 2026-02-04 02:29:14.932234 | orchestrator | } 2026-02-04 02:29:14.932253 | orchestrator | 2026-02-04 02:29:14.932263 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-04 02:29:14.932274 | orchestrator | 2026-02-04 02:29:14.932284 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-04 02:29:14.932294 | orchestrator | Wednesday 04 February 2026 02:29:12 +0000 (0:00:00.319) 0:00:26.049 **** 2026-02-04 02:29:14.932305 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-04 02:29:14.932316 | orchestrator | 2026-02-04 02:29:14.932328 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-04 
02:29:14.932338 | orchestrator | Wednesday 04 February 2026 02:29:12 +0000 (0:00:00.254) 0:00:26.303 **** 2026-02-04 02:29:14.932348 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:29:14.932359 | orchestrator | 2026-02-04 02:29:14.932366 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:14.932372 | orchestrator | Wednesday 04 February 2026 02:29:12 +0000 (0:00:00.258) 0:00:26.562 **** 2026-02-04 02:29:14.932378 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-04 02:29:14.932384 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-04 02:29:14.932391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-04 02:29:14.932397 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-04 02:29:14.932403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-04 02:29:14.932427 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-04 02:29:14.932434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-04 02:29:14.932440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-04 02:29:14.932446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-04 02:29:14.932453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-04 02:29:14.932459 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-04 02:29:14.932465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-04 02:29:14.932471 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-04 02:29:14.932477 | orchestrator | 2026-02-04 02:29:14.932483 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:14.932489 | orchestrator | Wednesday 04 February 2026 02:29:13 +0000 (0:00:00.436) 0:00:26.999 **** 2026-02-04 02:29:14.932495 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:14.932502 | orchestrator | 2026-02-04 02:29:14.932508 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:14.932514 | orchestrator | Wednesday 04 February 2026 02:29:13 +0000 (0:00:00.206) 0:00:27.205 **** 2026-02-04 02:29:14.932520 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:14.932526 | orchestrator | 2026-02-04 02:29:14.932532 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:14.932539 | orchestrator | Wednesday 04 February 2026 02:29:14 +0000 (0:00:00.653) 0:00:27.858 **** 2026-02-04 02:29:14.932545 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:14.932551 | orchestrator | 2026-02-04 02:29:14.932557 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:14.932564 | orchestrator | Wednesday 04 February 2026 02:29:14 +0000 (0:00:00.213) 0:00:28.071 **** 2026-02-04 02:29:14.932570 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:14.932576 | orchestrator | 2026-02-04 02:29:14.932582 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:14.932588 | orchestrator | Wednesday 04 February 2026 02:29:14 +0000 (0:00:00.221) 0:00:28.293 **** 2026-02-04 02:29:14.932600 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:14.932606 | orchestrator | 2026-02-04 02:29:14.932612 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-02-04 02:29:14.932619 | orchestrator | Wednesday 04 February 2026 02:29:14 +0000 (0:00:00.216) 0:00:28.510 **** 2026-02-04 02:29:14.932630 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:14.932639 | orchestrator | 2026-02-04 02:29:14.932657 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:26.774996 | orchestrator | Wednesday 04 February 2026 02:29:14 +0000 (0:00:00.218) 0:00:28.728 **** 2026-02-04 02:29:26.775185 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.775217 | orchestrator | 2026-02-04 02:29:26.775238 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:26.775259 | orchestrator | Wednesday 04 February 2026 02:29:15 +0000 (0:00:00.208) 0:00:28.937 **** 2026-02-04 02:29:26.775280 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.775299 | orchestrator | 2026-02-04 02:29:26.775320 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:26.775339 | orchestrator | Wednesday 04 February 2026 02:29:15 +0000 (0:00:00.242) 0:00:29.179 **** 2026-02-04 02:29:26.775351 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f) 2026-02-04 02:29:26.775363 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f) 2026-02-04 02:29:26.775374 | orchestrator | 2026-02-04 02:29:26.775403 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:26.775486 | orchestrator | Wednesday 04 February 2026 02:29:15 +0000 (0:00:00.455) 0:00:29.635 **** 2026-02-04 02:29:26.775500 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536) 2026-02-04 02:29:26.775512 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536) 2026-02-04 02:29:26.775523 | orchestrator | 2026-02-04 02:29:26.775537 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:26.775550 | orchestrator | Wednesday 04 February 2026 02:29:16 +0000 (0:00:00.435) 0:00:30.071 **** 2026-02-04 02:29:26.775563 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd) 2026-02-04 02:29:26.775577 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd) 2026-02-04 02:29:26.775589 | orchestrator | 2026-02-04 02:29:26.775602 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:26.775616 | orchestrator | Wednesday 04 February 2026 02:29:16 +0000 (0:00:00.451) 0:00:30.522 **** 2026-02-04 02:29:26.775629 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23) 2026-02-04 02:29:26.775640 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23) 2026-02-04 02:29:26.775651 | orchestrator | 2026-02-04 02:29:26.775662 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:26.775673 | orchestrator | Wednesday 04 February 2026 02:29:17 +0000 (0:00:00.701) 0:00:31.224 **** 2026-02-04 02:29:26.775684 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-04 02:29:26.775694 | orchestrator | 2026-02-04 02:29:26.775705 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.775716 | orchestrator | Wednesday 04 February 2026 02:29:18 +0000 (0:00:00.604) 0:00:31.828 **** 2026-02-04 02:29:26.775727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-02-04 02:29:26.775739 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-04 02:29:26.775750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-04 02:29:26.775787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-04 02:29:26.775798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-04 02:29:26.775809 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-04 02:29:26.775819 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-04 02:29:26.775830 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-04 02:29:26.775841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-04 02:29:26.775852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-04 02:29:26.775862 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-04 02:29:26.775873 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-04 02:29:26.775883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-04 02:29:26.775894 | orchestrator | 2026-02-04 02:29:26.775905 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.775916 | orchestrator | Wednesday 04 February 2026 02:29:18 +0000 (0:00:00.968) 0:00:32.797 **** 2026-02-04 02:29:26.775926 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.775937 | orchestrator | 2026-02-04 
02:29:26.775948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.775959 | orchestrator | Wednesday 04 February 2026 02:29:19 +0000 (0:00:00.213) 0:00:33.011 **** 2026-02-04 02:29:26.775969 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.775980 | orchestrator | 2026-02-04 02:29:26.776000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.776017 | orchestrator | Wednesday 04 February 2026 02:29:19 +0000 (0:00:00.216) 0:00:33.227 **** 2026-02-04 02:29:26.776042 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.776064 | orchestrator | 2026-02-04 02:29:26.776110 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.776130 | orchestrator | Wednesday 04 February 2026 02:29:19 +0000 (0:00:00.211) 0:00:33.439 **** 2026-02-04 02:29:26.776148 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.776168 | orchestrator | 2026-02-04 02:29:26.776187 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.776206 | orchestrator | Wednesday 04 February 2026 02:29:19 +0000 (0:00:00.222) 0:00:33.662 **** 2026-02-04 02:29:26.776226 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.776246 | orchestrator | 2026-02-04 02:29:26.776265 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.776284 | orchestrator | Wednesday 04 February 2026 02:29:20 +0000 (0:00:00.231) 0:00:33.893 **** 2026-02-04 02:29:26.776296 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.776307 | orchestrator | 2026-02-04 02:29:26.776318 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.776329 | orchestrator | Wednesday 04 February 2026 02:29:20 +0000 (0:00:00.236) 
0:00:34.129 **** 2026-02-04 02:29:26.776350 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.776361 | orchestrator | 2026-02-04 02:29:26.776372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.776382 | orchestrator | Wednesday 04 February 2026 02:29:20 +0000 (0:00:00.209) 0:00:34.339 **** 2026-02-04 02:29:26.776393 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.776404 | orchestrator | 2026-02-04 02:29:26.776456 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.776469 | orchestrator | Wednesday 04 February 2026 02:29:20 +0000 (0:00:00.216) 0:00:34.556 **** 2026-02-04 02:29:26.776479 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-04 02:29:26.776502 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-04 02:29:26.776513 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-04 02:29:26.776524 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-04 02:29:26.776535 | orchestrator | 2026-02-04 02:29:26.776546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.776557 | orchestrator | Wednesday 04 February 2026 02:29:21 +0000 (0:00:00.978) 0:00:35.535 **** 2026-02-04 02:29:26.776567 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.776578 | orchestrator | 2026-02-04 02:29:26.776589 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.776600 | orchestrator | Wednesday 04 February 2026 02:29:22 +0000 (0:00:00.684) 0:00:36.219 **** 2026-02-04 02:29:26.776610 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.776621 | orchestrator | 2026-02-04 02:29:26.776632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.776643 | orchestrator | Wednesday 04 
February 2026 02:29:22 +0000 (0:00:00.227) 0:00:36.446 **** 2026-02-04 02:29:26.776653 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.776664 | orchestrator | 2026-02-04 02:29:26.776675 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:26.776686 | orchestrator | Wednesday 04 February 2026 02:29:22 +0000 (0:00:00.231) 0:00:36.677 **** 2026-02-04 02:29:26.776697 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.776707 | orchestrator | 2026-02-04 02:29:26.776718 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-04 02:29:26.776729 | orchestrator | Wednesday 04 February 2026 02:29:23 +0000 (0:00:00.237) 0:00:36.915 **** 2026-02-04 02:29:26.776739 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.776750 | orchestrator | 2026-02-04 02:29:26.776761 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-04 02:29:26.776772 | orchestrator | Wednesday 04 February 2026 02:29:23 +0000 (0:00:00.153) 0:00:37.068 **** 2026-02-04 02:29:26.776783 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}}) 2026-02-04 02:29:26.776794 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8a64378d-205e-5817-b815-b641dc764843'}}) 2026-02-04 02:29:26.776805 | orchestrator | 2026-02-04 02:29:26.776815 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-04 02:29:26.776826 | orchestrator | Wednesday 04 February 2026 02:29:23 +0000 (0:00:00.209) 0:00:37.278 **** 2026-02-04 02:29:26.776838 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}) 2026-02-04 02:29:26.776851 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'}) 2026-02-04 02:29:26.776861 | orchestrator | 2026-02-04 02:29:26.776872 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-04 02:29:26.776883 | orchestrator | Wednesday 04 February 2026 02:29:25 +0000 (0:00:01.839) 0:00:39.118 **** 2026-02-04 02:29:26.776894 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:26.776906 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:26.776917 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:26.776928 | orchestrator | 2026-02-04 02:29:26.776939 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-04 02:29:26.776949 | orchestrator | Wednesday 04 February 2026 02:29:25 +0000 (0:00:00.169) 0:00:39.287 **** 2026-02-04 02:29:26.776960 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}) 2026-02-04 02:29:26.776987 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'}) 2026-02-04 02:29:32.651042 | orchestrator | 2026-02-04 02:29:32.651167 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-04 02:29:32.651186 | orchestrator | Wednesday 04 February 2026 02:29:26 +0000 (0:00:01.282) 0:00:40.569 **** 2026-02-04 02:29:32.651200 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 
'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:32.651215 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:32.651228 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.651242 | orchestrator | 2026-02-04 02:29:32.651270 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-04 02:29:32.651284 | orchestrator | Wednesday 04 February 2026 02:29:26 +0000 (0:00:00.163) 0:00:40.732 **** 2026-02-04 02:29:32.651296 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.651308 | orchestrator | 2026-02-04 02:29:32.651321 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-04 02:29:32.651333 | orchestrator | Wednesday 04 February 2026 02:29:27 +0000 (0:00:00.169) 0:00:40.902 **** 2026-02-04 02:29:32.651345 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:32.651358 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:32.651370 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.651382 | orchestrator | 2026-02-04 02:29:32.651394 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-04 02:29:32.651407 | orchestrator | Wednesday 04 February 2026 02:29:27 +0000 (0:00:00.160) 0:00:41.062 **** 2026-02-04 02:29:32.651438 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.651450 | orchestrator | 2026-02-04 02:29:32.651462 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-04 02:29:32.651474 | orchestrator | 
Wednesday 04 February 2026 02:29:27 +0000 (0:00:00.148) 0:00:41.211 **** 2026-02-04 02:29:32.651487 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:32.651499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:32.651511 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.651525 | orchestrator | 2026-02-04 02:29:32.651538 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-04 02:29:32.651549 | orchestrator | Wednesday 04 February 2026 02:29:27 +0000 (0:00:00.394) 0:00:41.605 **** 2026-02-04 02:29:32.651562 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.651575 | orchestrator | 2026-02-04 02:29:32.651587 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-04 02:29:32.651600 | orchestrator | Wednesday 04 February 2026 02:29:27 +0000 (0:00:00.151) 0:00:41.756 **** 2026-02-04 02:29:32.651614 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:32.651627 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:32.651640 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.651652 | orchestrator | 2026-02-04 02:29:32.651665 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-04 02:29:32.651705 | orchestrator | Wednesday 04 February 2026 02:29:28 +0000 (0:00:00.156) 0:00:41.913 **** 2026-02-04 02:29:32.651718 | orchestrator | ok: [testbed-node-4] 
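The `Create block VGs` and `Create block LVs` tasks above derive both names from each device's `osd_lvm_uuid` — the VG is `ceph-<uuid>` and the LV is `osd-block-<uuid>`, as visible in the loop items. A minimal sketch of that naming step; `build_lvm_volumes` is a hypothetical helper name modeled on the log output, not the playbook's actual code:

```python
# Sketch (assumption): map ceph_osd_devices entries, as printed by the
# "Create dict of block VGs -> PVs" task, to the lvm_volumes-style dicts
# that the "Create block VGs/LVs" tasks iterate over.
def build_lvm_volumes(ceph_osd_devices):
    volumes = []
    for device, opts in ceph_osd_devices.items():
        uuid = opts["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",   # LV name
            "data_vg": f"ceph-{uuid}",     # VG name
        })
    return volumes

devices = {
    "sdb": {"osd_lvm_uuid": "f48ca6a8-b497-5c65-8a3b-569ec358ef4c"},
    "sdc": {"osd_lvm_uuid": "8a64378d-205e-5817-b815-b641dc764843"},
}
for volume in build_lvm_volumes(devices):
    print(volume)
```

The on-host effect would then be along the lines of `vgcreate ceph-<uuid> /dev/sdX` followed by an `lvcreate` of `osd-block-<uuid>` inside that VG; the exact flags and module calls are an assumption, not taken from the log.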
2026-02-04 02:29:32.651731 | orchestrator | 2026-02-04 02:29:32.651743 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-04 02:29:32.651756 | orchestrator | Wednesday 04 February 2026 02:29:28 +0000 (0:00:00.145) 0:00:42.058 **** 2026-02-04 02:29:32.651769 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:32.651782 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:32.651794 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.651807 | orchestrator | 2026-02-04 02:29:32.651820 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-04 02:29:32.651832 | orchestrator | Wednesday 04 February 2026 02:29:28 +0000 (0:00:00.192) 0:00:42.251 **** 2026-02-04 02:29:32.651845 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:32.651857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:32.651869 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.651882 | orchestrator | 2026-02-04 02:29:32.651894 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-04 02:29:32.651927 | orchestrator | Wednesday 04 February 2026 02:29:28 +0000 (0:00:00.155) 0:00:42.407 **** 2026-02-04 02:29:32.651937 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 
02:29:32.651946 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:32.651954 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.651963 | orchestrator | 2026-02-04 02:29:32.651971 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-04 02:29:32.651979 | orchestrator | Wednesday 04 February 2026 02:29:28 +0000 (0:00:00.183) 0:00:42.591 **** 2026-02-04 02:29:32.652006 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.652014 | orchestrator | 2026-02-04 02:29:32.652021 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-04 02:29:32.652029 | orchestrator | Wednesday 04 February 2026 02:29:28 +0000 (0:00:00.139) 0:00:42.730 **** 2026-02-04 02:29:32.652036 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.652043 | orchestrator | 2026-02-04 02:29:32.652050 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-04 02:29:32.652057 | orchestrator | Wednesday 04 February 2026 02:29:29 +0000 (0:00:00.156) 0:00:42.886 **** 2026-02-04 02:29:32.652065 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.652072 | orchestrator | 2026-02-04 02:29:32.652079 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-04 02:29:32.652086 | orchestrator | Wednesday 04 February 2026 02:29:29 +0000 (0:00:00.155) 0:00:43.041 **** 2026-02-04 02:29:32.652093 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 02:29:32.652100 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-04 02:29:32.652108 | orchestrator | } 2026-02-04 02:29:32.652115 | orchestrator | 2026-02-04 02:29:32.652122 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-04 
02:29:32.652130 | orchestrator | Wednesday 04 February 2026 02:29:29 +0000 (0:00:00.164) 0:00:43.206 **** 2026-02-04 02:29:32.652137 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 02:29:32.652144 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-04 02:29:32.652158 | orchestrator | } 2026-02-04 02:29:32.652166 | orchestrator | 2026-02-04 02:29:32.652173 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-04 02:29:32.652180 | orchestrator | Wednesday 04 February 2026 02:29:29 +0000 (0:00:00.165) 0:00:43.371 **** 2026-02-04 02:29:32.652187 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 02:29:32.652194 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-04 02:29:32.652202 | orchestrator | } 2026-02-04 02:29:32.652209 | orchestrator | 2026-02-04 02:29:32.652216 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-04 02:29:32.652224 | orchestrator | Wednesday 04 February 2026 02:29:29 +0000 (0:00:00.391) 0:00:43.763 **** 2026-02-04 02:29:32.652231 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:29:32.652238 | orchestrator | 2026-02-04 02:29:32.652245 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-04 02:29:32.652252 | orchestrator | Wednesday 04 February 2026 02:29:30 +0000 (0:00:00.528) 0:00:44.291 **** 2026-02-04 02:29:32.652259 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:29:32.652266 | orchestrator | 2026-02-04 02:29:32.652273 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-04 02:29:32.652280 | orchestrator | Wednesday 04 February 2026 02:29:31 +0000 (0:00:00.522) 0:00:44.814 **** 2026-02-04 02:29:32.652288 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:29:32.652295 | orchestrator | 2026-02-04 02:29:32.652302 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-04 02:29:32.652309 | orchestrator | Wednesday 04 February 2026 02:29:31 +0000 (0:00:00.498) 0:00:45.312 **** 2026-02-04 02:29:32.652316 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:29:32.652323 | orchestrator | 2026-02-04 02:29:32.652330 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-04 02:29:32.652337 | orchestrator | Wednesday 04 February 2026 02:29:31 +0000 (0:00:00.169) 0:00:45.482 **** 2026-02-04 02:29:32.652344 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.652351 | orchestrator | 2026-02-04 02:29:32.652358 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-04 02:29:32.652366 | orchestrator | Wednesday 04 February 2026 02:29:31 +0000 (0:00:00.126) 0:00:45.608 **** 2026-02-04 02:29:32.652373 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.652380 | orchestrator | 2026-02-04 02:29:32.652387 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-04 02:29:32.652394 | orchestrator | Wednesday 04 February 2026 02:29:31 +0000 (0:00:00.119) 0:00:45.727 **** 2026-02-04 02:29:32.652401 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 02:29:32.652408 | orchestrator |  "vgs_report": { 2026-02-04 02:29:32.652433 | orchestrator |  "vg": [] 2026-02-04 02:29:32.652441 | orchestrator |  } 2026-02-04 02:29:32.652449 | orchestrator | } 2026-02-04 02:29:32.652456 | orchestrator | 2026-02-04 02:29:32.652463 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-04 02:29:32.652470 | orchestrator | Wednesday 04 February 2026 02:29:32 +0000 (0:00:00.162) 0:00:45.890 **** 2026-02-04 02:29:32.652477 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.652484 | orchestrator | 2026-02-04 02:29:32.652491 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-04 02:29:32.652498 | orchestrator | Wednesday 04 February 2026 02:29:32 +0000 (0:00:00.141) 0:00:46.032 **** 2026-02-04 02:29:32.652505 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.652513 | orchestrator | 2026-02-04 02:29:32.652520 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-04 02:29:32.652527 | orchestrator | Wednesday 04 February 2026 02:29:32 +0000 (0:00:00.139) 0:00:46.172 **** 2026-02-04 02:29:32.652534 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.652541 | orchestrator | 2026-02-04 02:29:32.652548 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-04 02:29:32.652555 | orchestrator | Wednesday 04 February 2026 02:29:32 +0000 (0:00:00.131) 0:00:46.303 **** 2026-02-04 02:29:32.652568 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:32.652575 | orchestrator | 2026-02-04 02:29:32.652588 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-04 02:29:37.613284 | orchestrator | Wednesday 04 February 2026 02:29:32 +0000 (0:00:00.143) 0:00:46.447 **** 2026-02-04 02:29:37.613368 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613377 | orchestrator | 2026-02-04 02:29:37.613385 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-04 02:29:37.613392 | orchestrator | Wednesday 04 February 2026 02:29:32 +0000 (0:00:00.341) 0:00:46.789 **** 2026-02-04 02:29:37.613398 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613405 | orchestrator | 2026-02-04 02:29:37.613411 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-04 02:29:37.613435 | orchestrator | Wednesday 04 February 2026 02:29:33 +0000 (0:00:00.174) 0:00:46.963 **** 2026-02-04 02:29:37.613445 | orchestrator | skipping: [testbed-node-4] 
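The `Gather DB/WAL/DB+WAL VGs with total and available size in bytes` and `Combine JSON from _db/wal/db_wal_vgs_cmd_output` tasks above collect volume-group size reports, which print as `vgs_report: {"vg": []}` when no DB/WAL VGs exist. A hedged sketch of parsing such a report; the sample JSON and helper name are assumptions modeled on `vgs --reportformat json --units b` output, not the playbook's code:

```python
import json

# Hypothetical helper: turn a `vgs --reportformat json --units b
# -o vg_name,vg_size,vg_free` report into {vg_name: (total, free)} in bytes.
def parse_vgs_report(raw):
    report = json.loads(raw)
    sizes = {}
    for section in report.get("report", []):
        for vg in section.get("vg", []):
            # with --units b, vgs prints byte values with a trailing "B"
            total = int(vg["vg_size"].rstrip("B"))
            free = int(vg["vg_free"].rstrip("B"))
            sizes[vg["vg_name"]] = (total, free)
    return sizes

sample = '{"report": [{"vg": [{"vg_name": "ceph-db",' \
         ' "vg_size": "107374182400B", "vg_free": "53687091200B"}]}]}'
print(parse_vgs_report(sample))
```

An empty report (`{"report": [{"vg": []}]}`) yields an empty dict, matching the `"vg": []` case logged here, after which the size-check tasks are skipped.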
2026-02-04 02:29:37.613454 | orchestrator | 2026-02-04 02:29:37.613477 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-04 02:29:37.613483 | orchestrator | Wednesday 04 February 2026 02:29:33 +0000 (0:00:00.147) 0:00:47.111 **** 2026-02-04 02:29:37.613489 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613495 | orchestrator | 2026-02-04 02:29:37.613501 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-04 02:29:37.613507 | orchestrator | Wednesday 04 February 2026 02:29:33 +0000 (0:00:00.139) 0:00:47.250 **** 2026-02-04 02:29:37.613512 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613518 | orchestrator | 2026-02-04 02:29:37.613524 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-04 02:29:37.613530 | orchestrator | Wednesday 04 February 2026 02:29:33 +0000 (0:00:00.141) 0:00:47.392 **** 2026-02-04 02:29:37.613536 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613542 | orchestrator | 2026-02-04 02:29:37.613548 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-04 02:29:37.613554 | orchestrator | Wednesday 04 February 2026 02:29:33 +0000 (0:00:00.142) 0:00:47.535 **** 2026-02-04 02:29:37.613560 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613566 | orchestrator | 2026-02-04 02:29:37.613572 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-04 02:29:37.613577 | orchestrator | Wednesday 04 February 2026 02:29:33 +0000 (0:00:00.134) 0:00:47.669 **** 2026-02-04 02:29:37.613583 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613589 | orchestrator | 2026-02-04 02:29:37.613595 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-04 02:29:37.613600 | orchestrator | 
Wednesday 04 February 2026 02:29:34 +0000 (0:00:00.137) 0:00:47.807 **** 2026-02-04 02:29:37.613606 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613612 | orchestrator | 2026-02-04 02:29:37.613618 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-04 02:29:37.613624 | orchestrator | Wednesday 04 February 2026 02:29:34 +0000 (0:00:00.148) 0:00:47.956 **** 2026-02-04 02:29:37.613629 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613635 | orchestrator | 2026-02-04 02:29:37.613641 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-04 02:29:37.613647 | orchestrator | Wednesday 04 February 2026 02:29:34 +0000 (0:00:00.145) 0:00:48.101 **** 2026-02-04 02:29:37.613654 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:37.613661 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:37.613667 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613673 | orchestrator | 2026-02-04 02:29:37.613679 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-04 02:29:37.613702 | orchestrator | Wednesday 04 February 2026 02:29:34 +0000 (0:00:00.162) 0:00:48.264 **** 2026-02-04 02:29:37.613708 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:37.613714 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:37.613720 | orchestrator | skipping: 
[testbed-node-4] 2026-02-04 02:29:37.613726 | orchestrator | 2026-02-04 02:29:37.613731 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-04 02:29:37.613737 | orchestrator | Wednesday 04 February 2026 02:29:34 +0000 (0:00:00.157) 0:00:48.422 **** 2026-02-04 02:29:37.613743 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:37.613749 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:37.613754 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613761 | orchestrator | 2026-02-04 02:29:37.613767 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-04 02:29:37.613772 | orchestrator | Wednesday 04 February 2026 02:29:35 +0000 (0:00:00.393) 0:00:48.816 **** 2026-02-04 02:29:37.613778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:37.613784 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:37.613790 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613796 | orchestrator | 2026-02-04 02:29:37.613815 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-04 02:29:37.613821 | orchestrator | Wednesday 04 February 2026 02:29:35 +0000 (0:00:00.168) 0:00:48.984 **** 2026-02-04 02:29:37.613828 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 
'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:37.613835 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:37.613842 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613849 | orchestrator | 2026-02-04 02:29:37.613859 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-04 02:29:37.613866 | orchestrator | Wednesday 04 February 2026 02:29:35 +0000 (0:00:00.184) 0:00:49.169 **** 2026-02-04 02:29:37.613873 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:37.613879 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:37.613886 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613892 | orchestrator | 2026-02-04 02:29:37.613899 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-04 02:29:37.613906 | orchestrator | Wednesday 04 February 2026 02:29:35 +0000 (0:00:00.164) 0:00:49.334 **** 2026-02-04 02:29:37.613912 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:37.613919 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:37.613925 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613937 | orchestrator | 2026-02-04 02:29:37.613944 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-04 
02:29:37.613951 | orchestrator | Wednesday 04 February 2026 02:29:35 +0000 (0:00:00.170) 0:00:49.504 **** 2026-02-04 02:29:37.613957 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:37.613964 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:37.613971 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.613979 | orchestrator | 2026-02-04 02:29:37.613985 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-04 02:29:37.613992 | orchestrator | Wednesday 04 February 2026 02:29:35 +0000 (0:00:00.173) 0:00:49.677 **** 2026-02-04 02:29:37.613999 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:29:37.614006 | orchestrator | 2026-02-04 02:29:37.614012 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-04 02:29:37.614059 | orchestrator | Wednesday 04 February 2026 02:29:36 +0000 (0:00:00.534) 0:00:50.212 **** 2026-02-04 02:29:37.614066 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:29:37.614072 | orchestrator | 2026-02-04 02:29:37.614079 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-04 02:29:37.614087 | orchestrator | Wednesday 04 February 2026 02:29:36 +0000 (0:00:00.514) 0:00:50.726 **** 2026-02-04 02:29:37.614098 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:29:37.614108 | orchestrator | 2026-02-04 02:29:37.614117 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-04 02:29:37.614127 | orchestrator | Wednesday 04 February 2026 02:29:37 +0000 (0:00:00.144) 0:00:50.870 **** 2026-02-04 02:29:37.614137 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'vg_name': 'ceph-8a64378d-205e-5817-b815-b641dc764843'}) 2026-02-04 02:29:37.614149 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'vg_name': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}) 2026-02-04 02:29:37.614160 | orchestrator | 2026-02-04 02:29:37.614169 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-04 02:29:37.614175 | orchestrator | Wednesday 04 February 2026 02:29:37 +0000 (0:00:00.169) 0:00:51.040 **** 2026-02-04 02:29:37.614181 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:37.614187 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:37.614193 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:37.614199 | orchestrator | 2026-02-04 02:29:37.614205 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-04 02:29:37.614210 | orchestrator | Wednesday 04 February 2026 02:29:37 +0000 (0:00:00.188) 0:00:51.228 **** 2026-02-04 02:29:37.614216 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:37.614241 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:44.441901 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:44.441990 | orchestrator | 2026-02-04 02:29:44.442001 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-04 02:29:44.442010 | 
orchestrator | Wednesday 04 February 2026 02:29:37 +0000 (0:00:00.181) 0:00:51.410 **** 2026-02-04 02:29:44.442065 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 02:29:44.442105 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 02:29:44.442113 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:29:44.442119 | orchestrator | 2026-02-04 02:29:44.442126 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-04 02:29:44.442132 | orchestrator | Wednesday 04 February 2026 02:29:37 +0000 (0:00:00.387) 0:00:51.798 **** 2026-02-04 02:29:44.442138 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 02:29:44.442145 | orchestrator |  "lvm_report": { 2026-02-04 02:29:44.442153 | orchestrator |  "lv": [ 2026-02-04 02:29:44.442160 | orchestrator |  { 2026-02-04 02:29:44.442167 | orchestrator |  "lv_name": "osd-block-8a64378d-205e-5817-b815-b641dc764843", 2026-02-04 02:29:44.442173 | orchestrator |  "vg_name": "ceph-8a64378d-205e-5817-b815-b641dc764843" 2026-02-04 02:29:44.442180 | orchestrator |  }, 2026-02-04 02:29:44.442186 | orchestrator |  { 2026-02-04 02:29:44.442192 | orchestrator |  "lv_name": "osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c", 2026-02-04 02:29:44.442199 | orchestrator |  "vg_name": "ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c" 2026-02-04 02:29:44.442205 | orchestrator |  } 2026-02-04 02:29:44.442211 | orchestrator |  ], 2026-02-04 02:29:44.442217 | orchestrator |  "pv": [ 2026-02-04 02:29:44.442224 | orchestrator |  { 2026-02-04 02:29:44.442230 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-04 02:29:44.442236 | orchestrator |  "vg_name": "ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c" 2026-02-04 02:29:44.442243 | orchestrator |  }, 2026-02-04 
02:29:44.442250 | orchestrator |  { 2026-02-04 02:29:44.442256 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-04 02:29:44.442262 | orchestrator |  "vg_name": "ceph-8a64378d-205e-5817-b815-b641dc764843" 2026-02-04 02:29:44.442268 | orchestrator |  } 2026-02-04 02:29:44.442275 | orchestrator |  ] 2026-02-04 02:29:44.442281 | orchestrator |  } 2026-02-04 02:29:44.442287 | orchestrator | } 2026-02-04 02:29:44.442294 | orchestrator | 2026-02-04 02:29:44.442300 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-04 02:29:44.442306 | orchestrator | 2026-02-04 02:29:44.442313 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-04 02:29:44.442319 | orchestrator | Wednesday 04 February 2026 02:29:38 +0000 (0:00:00.311) 0:00:52.109 **** 2026-02-04 02:29:44.442325 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-04 02:29:44.442332 | orchestrator | 2026-02-04 02:29:44.442338 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-04 02:29:44.442345 | orchestrator | Wednesday 04 February 2026 02:29:38 +0000 (0:00:00.252) 0:00:52.362 **** 2026-02-04 02:29:44.442351 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:29:44.442357 | orchestrator | 2026-02-04 02:29:44.442364 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442370 | orchestrator | Wednesday 04 February 2026 02:29:38 +0000 (0:00:00.247) 0:00:52.610 **** 2026-02-04 02:29:44.442376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-04 02:29:44.442382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-04 02:29:44.442388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-04 02:29:44.442394 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-04 02:29:44.442401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-04 02:29:44.442407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-04 02:29:44.442413 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-04 02:29:44.442449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-04 02:29:44.442457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-04 02:29:44.442465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-04 02:29:44.442472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-04 02:29:44.442479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-04 02:29:44.442487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-04 02:29:44.442494 | orchestrator | 2026-02-04 02:29:44.442501 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442509 | orchestrator | Wednesday 04 February 2026 02:29:39 +0000 (0:00:00.446) 0:00:53.056 **** 2026-02-04 02:29:44.442516 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:44.442523 | orchestrator | 2026-02-04 02:29:44.442530 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442538 | orchestrator | Wednesday 04 February 2026 02:29:39 +0000 (0:00:00.245) 0:00:53.302 **** 2026-02-04 02:29:44.442545 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:44.442552 | orchestrator | 2026-02-04 
02:29:44.442559 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442578 | orchestrator | Wednesday 04 February 2026 02:29:39 +0000 (0:00:00.219) 0:00:53.521 **** 2026-02-04 02:29:44.442585 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:44.442591 | orchestrator | 2026-02-04 02:29:44.442598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442604 | orchestrator | Wednesday 04 February 2026 02:29:39 +0000 (0:00:00.221) 0:00:53.743 **** 2026-02-04 02:29:44.442610 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:44.442616 | orchestrator | 2026-02-04 02:29:44.442622 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442628 | orchestrator | Wednesday 04 February 2026 02:29:40 +0000 (0:00:00.676) 0:00:54.420 **** 2026-02-04 02:29:44.442635 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:44.442641 | orchestrator | 2026-02-04 02:29:44.442647 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442654 | orchestrator | Wednesday 04 February 2026 02:29:40 +0000 (0:00:00.241) 0:00:54.661 **** 2026-02-04 02:29:44.442660 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:44.442666 | orchestrator | 2026-02-04 02:29:44.442673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442679 | orchestrator | Wednesday 04 February 2026 02:29:41 +0000 (0:00:00.220) 0:00:54.882 **** 2026-02-04 02:29:44.442685 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:44.442692 | orchestrator | 2026-02-04 02:29:44.442698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442704 | orchestrator | Wednesday 04 February 2026 02:29:41 +0000 (0:00:00.226) 
0:00:55.109 **** 2026-02-04 02:29:44.442710 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:44.442716 | orchestrator | 2026-02-04 02:29:44.442722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442729 | orchestrator | Wednesday 04 February 2026 02:29:41 +0000 (0:00:00.229) 0:00:55.339 **** 2026-02-04 02:29:44.442735 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118) 2026-02-04 02:29:44.442742 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118) 2026-02-04 02:29:44.442748 | orchestrator | 2026-02-04 02:29:44.442754 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442761 | orchestrator | Wednesday 04 February 2026 02:29:41 +0000 (0:00:00.449) 0:00:55.788 **** 2026-02-04 02:29:44.442825 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675) 2026-02-04 02:29:44.442842 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675) 2026-02-04 02:29:44.442849 | orchestrator | 2026-02-04 02:29:44.442855 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442862 | orchestrator | Wednesday 04 February 2026 02:29:42 +0000 (0:00:00.463) 0:00:56.252 **** 2026-02-04 02:29:44.442868 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52) 2026-02-04 02:29:44.442874 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52) 2026-02-04 02:29:44.442880 | orchestrator | 2026-02-04 02:29:44.442886 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442892 | orchestrator | Wednesday 04 
February 2026 02:29:42 +0000 (0:00:00.472) 0:00:56.724 **** 2026-02-04 02:29:44.442899 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b) 2026-02-04 02:29:44.442905 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b) 2026-02-04 02:29:44.442912 | orchestrator | 2026-02-04 02:29:44.442918 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-04 02:29:44.442924 | orchestrator | Wednesday 04 February 2026 02:29:43 +0000 (0:00:00.477) 0:00:57.201 **** 2026-02-04 02:29:44.442930 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-04 02:29:44.442936 | orchestrator | 2026-02-04 02:29:44.442943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:44.442949 | orchestrator | Wednesday 04 February 2026 02:29:43 +0000 (0:00:00.383) 0:00:57.585 **** 2026-02-04 02:29:44.442955 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-04 02:29:44.442961 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-04 02:29:44.442967 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-04 02:29:44.442973 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-04 02:29:44.442980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-04 02:29:44.442986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-04 02:29:44.442992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-04 02:29:44.442998 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-04 02:29:44.443004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-04 02:29:44.443010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-04 02:29:44.443016 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-04 02:29:44.443028 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-04 02:29:55.032294 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-04 02:29:55.032414 | orchestrator | 2026-02-04 02:29:55.032505 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:55.032520 | orchestrator | Wednesday 04 February 2026 02:29:44 +0000 (0:00:00.648) 0:00:58.233 **** 2026-02-04 02:29:55.032531 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.032543 | orchestrator | 2026-02-04 02:29:55.032554 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:55.032580 | orchestrator | Wednesday 04 February 2026 02:29:44 +0000 (0:00:00.235) 0:00:58.469 **** 2026-02-04 02:29:55.032592 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.032624 | orchestrator | 2026-02-04 02:29:55.032635 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:55.032644 | orchestrator | Wednesday 04 February 2026 02:29:44 +0000 (0:00:00.227) 0:00:58.697 **** 2026-02-04 02:29:55.032655 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.032663 | orchestrator | 2026-02-04 02:29:55.032672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:55.032681 | 
orchestrator | Wednesday 04 February 2026 02:29:45 +0000 (0:00:00.215) 0:00:58.912 **** 2026-02-04 02:29:55.032691 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.032700 | orchestrator | 2026-02-04 02:29:55.032709 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:55.032719 | orchestrator | Wednesday 04 February 2026 02:29:45 +0000 (0:00:00.212) 0:00:59.124 **** 2026-02-04 02:29:55.032728 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.032738 | orchestrator | 2026-02-04 02:29:55.032748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:55.032758 | orchestrator | Wednesday 04 February 2026 02:29:45 +0000 (0:00:00.238) 0:00:59.363 **** 2026-02-04 02:29:55.032768 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.032778 | orchestrator | 2026-02-04 02:29:55.032788 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:55.032798 | orchestrator | Wednesday 04 February 2026 02:29:45 +0000 (0:00:00.222) 0:00:59.586 **** 2026-02-04 02:29:55.032809 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.032819 | orchestrator | 2026-02-04 02:29:55.032828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:55.032838 | orchestrator | Wednesday 04 February 2026 02:29:45 +0000 (0:00:00.211) 0:00:59.797 **** 2026-02-04 02:29:55.032847 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.032856 | orchestrator | 2026-02-04 02:29:55.032866 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:55.032876 | orchestrator | Wednesday 04 February 2026 02:29:46 +0000 (0:00:00.231) 0:01:00.029 **** 2026-02-04 02:29:55.032887 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-04 02:29:55.032898 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-02-04 02:29:55.032908 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-04 02:29:55.032917 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-04 02:29:55.032928 | orchestrator | 2026-02-04 02:29:55.032937 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:55.032948 | orchestrator | Wednesday 04 February 2026 02:29:47 +0000 (0:00:00.925) 0:01:00.954 **** 2026-02-04 02:29:55.032959 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.032971 | orchestrator | 2026-02-04 02:29:55.032980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:55.032991 | orchestrator | Wednesday 04 February 2026 02:29:47 +0000 (0:00:00.726) 0:01:01.681 **** 2026-02-04 02:29:55.033001 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.033013 | orchestrator | 2026-02-04 02:29:55.033022 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:55.033031 | orchestrator | Wednesday 04 February 2026 02:29:48 +0000 (0:00:00.235) 0:01:01.917 **** 2026-02-04 02:29:55.033041 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.033051 | orchestrator | 2026-02-04 02:29:55.033061 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-04 02:29:55.033073 | orchestrator | Wednesday 04 February 2026 02:29:48 +0000 (0:00:00.219) 0:01:02.136 **** 2026-02-04 02:29:55.033082 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.033090 | orchestrator | 2026-02-04 02:29:55.033098 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-04 02:29:55.033106 | orchestrator | Wednesday 04 February 2026 02:29:48 +0000 (0:00:00.225) 0:01:02.361 **** 2026-02-04 02:29:55.033115 | orchestrator | skipping: [testbed-node-5] 2026-02-04 
02:29:55.033123 | orchestrator | 2026-02-04 02:29:55.033143 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-04 02:29:55.033151 | orchestrator | Wednesday 04 February 2026 02:29:48 +0000 (0:00:00.137) 0:01:02.499 **** 2026-02-04 02:29:55.033161 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}}) 2026-02-04 02:29:55.033170 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '43734a2f-bb9f-5443-b704-3f4971f68639'}}) 2026-02-04 02:29:55.033178 | orchestrator | 2026-02-04 02:29:55.033187 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-04 02:29:55.033195 | orchestrator | Wednesday 04 February 2026 02:29:48 +0000 (0:00:00.236) 0:01:02.735 **** 2026-02-04 02:29:55.033205 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}) 2026-02-04 02:29:55.033214 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'}) 2026-02-04 02:29:55.033224 | orchestrator | 2026-02-04 02:29:55.033233 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-04 02:29:55.033261 | orchestrator | Wednesday 04 February 2026 02:29:51 +0000 (0:00:02.931) 0:01:05.667 **** 2026-02-04 02:29:55.033271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:29:55.033282 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:29:55.033292 | orchestrator | skipping: 
[testbed-node-5] 2026-02-04 02:29:55.033300 | orchestrator | 2026-02-04 02:29:55.033317 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-04 02:29:55.033327 | orchestrator | Wednesday 04 February 2026 02:29:52 +0000 (0:00:00.166) 0:01:05.833 **** 2026-02-04 02:29:55.033336 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}) 2026-02-04 02:29:55.033344 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'}) 2026-02-04 02:29:55.033353 | orchestrator | 2026-02-04 02:29:55.033361 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-04 02:29:55.033369 | orchestrator | Wednesday 04 February 2026 02:29:53 +0000 (0:00:01.340) 0:01:07.174 **** 2026-02-04 02:29:55.033377 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:29:55.033385 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:29:55.033394 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.033402 | orchestrator | 2026-02-04 02:29:55.033410 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-04 02:29:55.033418 | orchestrator | Wednesday 04 February 2026 02:29:53 +0000 (0:00:00.158) 0:01:07.333 **** 2026-02-04 02:29:55.033449 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.033458 | orchestrator | 2026-02-04 02:29:55.033467 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-04 02:29:55.033476 | 
orchestrator | Wednesday 04 February 2026 02:29:53 +0000 (0:00:00.146) 0:01:07.479 **** 2026-02-04 02:29:55.033486 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:29:55.033495 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:29:55.033515 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.033524 | orchestrator | 2026-02-04 02:29:55.033534 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-04 02:29:55.033544 | orchestrator | Wednesday 04 February 2026 02:29:54 +0000 (0:00:00.383) 0:01:07.863 **** 2026-02-04 02:29:55.033553 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.033563 | orchestrator | 2026-02-04 02:29:55.033572 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-04 02:29:55.033582 | orchestrator | Wednesday 04 February 2026 02:29:54 +0000 (0:00:00.163) 0:01:08.027 **** 2026-02-04 02:29:55.033591 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:29:55.033601 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:29:55.033611 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.033620 | orchestrator | 2026-02-04 02:29:55.033629 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-04 02:29:55.033638 | orchestrator | Wednesday 04 February 2026 02:29:54 +0000 (0:00:00.165) 0:01:08.192 **** 2026-02-04 02:29:55.033646 | orchestrator | 
skipping: [testbed-node-5] 2026-02-04 02:29:55.033655 | orchestrator | 2026-02-04 02:29:55.033663 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-04 02:29:55.033670 | orchestrator | Wednesday 04 February 2026 02:29:54 +0000 (0:00:00.144) 0:01:08.337 **** 2026-02-04 02:29:55.033679 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:29:55.033688 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:29:55.033696 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:29:55.033705 | orchestrator | 2026-02-04 02:29:55.033715 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-04 02:29:55.033724 | orchestrator | Wednesday 04 February 2026 02:29:54 +0000 (0:00:00.174) 0:01:08.511 **** 2026-02-04 02:29:55.033732 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:29:55.033741 | orchestrator | 2026-02-04 02:29:55.033750 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-04 02:29:55.033758 | orchestrator | Wednesday 04 February 2026 02:29:54 +0000 (0:00:00.152) 0:01:08.663 **** 2026-02-04 02:29:55.033778 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:01.790269 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:01.790394 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.790414 | orchestrator | 2026-02-04 02:30:01.790485 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-02-04 02:30:01.790504 | orchestrator | Wednesday 04 February 2026 02:29:55 +0000 (0:00:00.162) 0:01:08.826 **** 2026-02-04 02:30:01.790540 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:01.790555 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:01.790569 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.790582 | orchestrator | 2026-02-04 02:30:01.790597 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-04 02:30:01.790611 | orchestrator | Wednesday 04 February 2026 02:29:55 +0000 (0:00:00.161) 0:01:08.987 **** 2026-02-04 02:30:01.790652 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:01.790661 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:01.790674 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.790693 | orchestrator | 2026-02-04 02:30:01.790709 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-04 02:30:01.790723 | orchestrator | Wednesday 04 February 2026 02:29:55 +0000 (0:00:00.190) 0:01:09.178 **** 2026-02-04 02:30:01.790736 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.790750 | orchestrator | 2026-02-04 02:30:01.790763 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-04 02:30:01.790777 | orchestrator | Wednesday 04 February 2026 02:29:55 
+0000 (0:00:00.147) 0:01:09.325 **** 2026-02-04 02:30:01.790790 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.790805 | orchestrator | 2026-02-04 02:30:01.790818 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-04 02:30:01.790832 | orchestrator | Wednesday 04 February 2026 02:29:55 +0000 (0:00:00.138) 0:01:09.463 **** 2026-02-04 02:30:01.790846 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.790859 | orchestrator | 2026-02-04 02:30:01.790873 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-04 02:30:01.790883 | orchestrator | Wednesday 04 February 2026 02:29:56 +0000 (0:00:00.367) 0:01:09.831 **** 2026-02-04 02:30:01.790891 | orchestrator | ok: [testbed-node-5] => { 2026-02-04 02:30:01.790900 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-04 02:30:01.790908 | orchestrator | } 2026-02-04 02:30:01.790916 | orchestrator | 2026-02-04 02:30:01.790924 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-04 02:30:01.790932 | orchestrator | Wednesday 04 February 2026 02:29:56 +0000 (0:00:00.167) 0:01:09.998 **** 2026-02-04 02:30:01.790939 | orchestrator | ok: [testbed-node-5] => { 2026-02-04 02:30:01.790947 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-04 02:30:01.790955 | orchestrator | } 2026-02-04 02:30:01.790963 | orchestrator | 2026-02-04 02:30:01.790971 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-04 02:30:01.790979 | orchestrator | Wednesday 04 February 2026 02:29:56 +0000 (0:00:00.153) 0:01:10.151 **** 2026-02-04 02:30:01.790987 | orchestrator | ok: [testbed-node-5] => { 2026-02-04 02:30:01.790995 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-04 02:30:01.791003 | orchestrator | } 2026-02-04 02:30:01.791011 | orchestrator | 2026-02-04 02:30:01.791019 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-04 02:30:01.791027 | orchestrator | Wednesday 04 February 2026 02:29:56 +0000 (0:00:00.165) 0:01:10.317 **** 2026-02-04 02:30:01.791034 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:30:01.791043 | orchestrator | 2026-02-04 02:30:01.791050 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-04 02:30:01.791058 | orchestrator | Wednesday 04 February 2026 02:29:57 +0000 (0:00:00.528) 0:01:10.845 **** 2026-02-04 02:30:01.791066 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:30:01.791074 | orchestrator | 2026-02-04 02:30:01.791082 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-04 02:30:01.791090 | orchestrator | Wednesday 04 February 2026 02:29:57 +0000 (0:00:00.518) 0:01:11.364 **** 2026-02-04 02:30:01.791098 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:30:01.791105 | orchestrator | 2026-02-04 02:30:01.791113 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-04 02:30:01.791121 | orchestrator | Wednesday 04 February 2026 02:29:58 +0000 (0:00:00.511) 0:01:11.875 **** 2026-02-04 02:30:01.791129 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:30:01.791137 | orchestrator | 2026-02-04 02:30:01.791145 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-04 02:30:01.791161 | orchestrator | Wednesday 04 February 2026 02:29:58 +0000 (0:00:00.158) 0:01:12.033 **** 2026-02-04 02:30:01.791169 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791177 | orchestrator | 2026-02-04 02:30:01.791185 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-04 02:30:01.791193 | orchestrator | Wednesday 04 February 2026 02:29:58 +0000 (0:00:00.117) 0:01:12.151 **** 2026-02-04 02:30:01.791201 | 
orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791209 | orchestrator | 2026-02-04 02:30:01.791217 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-04 02:30:01.791225 | orchestrator | Wednesday 04 February 2026 02:29:58 +0000 (0:00:00.111) 0:01:12.262 **** 2026-02-04 02:30:01.791232 | orchestrator | ok: [testbed-node-5] => { 2026-02-04 02:30:01.791241 | orchestrator |  "vgs_report": { 2026-02-04 02:30:01.791249 | orchestrator |  "vg": [] 2026-02-04 02:30:01.791276 | orchestrator |  } 2026-02-04 02:30:01.791286 | orchestrator | } 2026-02-04 02:30:01.791294 | orchestrator | 2026-02-04 02:30:01.791302 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-04 02:30:01.791310 | orchestrator | Wednesday 04 February 2026 02:29:58 +0000 (0:00:00.168) 0:01:12.430 **** 2026-02-04 02:30:01.791318 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791325 | orchestrator | 2026-02-04 02:30:01.791333 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-04 02:30:01.791341 | orchestrator | Wednesday 04 February 2026 02:29:58 +0000 (0:00:00.144) 0:01:12.575 **** 2026-02-04 02:30:01.791354 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791363 | orchestrator | 2026-02-04 02:30:01.791370 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-04 02:30:01.791378 | orchestrator | Wednesday 04 February 2026 02:29:59 +0000 (0:00:00.369) 0:01:12.944 **** 2026-02-04 02:30:01.791386 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791395 | orchestrator | 2026-02-04 02:30:01.791409 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-04 02:30:01.791422 | orchestrator | Wednesday 04 February 2026 02:29:59 +0000 (0:00:00.141) 0:01:13.086 **** 2026-02-04 02:30:01.791499 | 
orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791513 | orchestrator | 2026-02-04 02:30:01.791526 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-04 02:30:01.791540 | orchestrator | Wednesday 04 February 2026 02:29:59 +0000 (0:00:00.155) 0:01:13.241 **** 2026-02-04 02:30:01.791554 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791567 | orchestrator | 2026-02-04 02:30:01.791599 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-04 02:30:01.791616 | orchestrator | Wednesday 04 February 2026 02:29:59 +0000 (0:00:00.158) 0:01:13.400 **** 2026-02-04 02:30:01.791625 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791633 | orchestrator | 2026-02-04 02:30:01.791641 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-04 02:30:01.791648 | orchestrator | Wednesday 04 February 2026 02:29:59 +0000 (0:00:00.208) 0:01:13.609 **** 2026-02-04 02:30:01.791656 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791667 | orchestrator | 2026-02-04 02:30:01.791680 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-04 02:30:01.791693 | orchestrator | Wednesday 04 February 2026 02:29:59 +0000 (0:00:00.177) 0:01:13.786 **** 2026-02-04 02:30:01.791706 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791719 | orchestrator | 2026-02-04 02:30:01.791733 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-04 02:30:01.791747 | orchestrator | Wednesday 04 February 2026 02:30:00 +0000 (0:00:00.166) 0:01:13.952 **** 2026-02-04 02:30:01.791761 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791773 | orchestrator | 2026-02-04 02:30:01.791787 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-02-04 02:30:01.791800 | orchestrator | Wednesday 04 February 2026 02:30:00 +0000 (0:00:00.136) 0:01:14.088 **** 2026-02-04 02:30:01.791825 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791834 | orchestrator | 2026-02-04 02:30:01.791842 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-04 02:30:01.791850 | orchestrator | Wednesday 04 February 2026 02:30:00 +0000 (0:00:00.156) 0:01:14.245 **** 2026-02-04 02:30:01.791858 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791866 | orchestrator | 2026-02-04 02:30:01.791873 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-04 02:30:01.791882 | orchestrator | Wednesday 04 February 2026 02:30:00 +0000 (0:00:00.143) 0:01:14.389 **** 2026-02-04 02:30:01.791889 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791898 | orchestrator | 2026-02-04 02:30:01.791905 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-04 02:30:01.791913 | orchestrator | Wednesday 04 February 2026 02:30:00 +0000 (0:00:00.148) 0:01:14.537 **** 2026-02-04 02:30:01.791921 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791929 | orchestrator | 2026-02-04 02:30:01.791937 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-04 02:30:01.791955 | orchestrator | Wednesday 04 February 2026 02:30:01 +0000 (0:00:00.382) 0:01:14.920 **** 2026-02-04 02:30:01.791963 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.791971 | orchestrator | 2026-02-04 02:30:01.791979 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-04 02:30:01.791987 | orchestrator | Wednesday 04 February 2026 02:30:01 +0000 (0:00:00.144) 0:01:15.064 **** 2026-02-04 02:30:01.791995 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:01.792003 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:01.792011 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.792019 | orchestrator | 2026-02-04 02:30:01.792027 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-04 02:30:01.792034 | orchestrator | Wednesday 04 February 2026 02:30:01 +0000 (0:00:00.184) 0:01:15.249 **** 2026-02-04 02:30:01.792042 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:01.792050 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:01.792062 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:01.792075 | orchestrator | 2026-02-04 02:30:01.792089 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-04 02:30:01.792104 | orchestrator | Wednesday 04 February 2026 02:30:01 +0000 (0:00:00.169) 0:01:15.418 **** 2026-02-04 02:30:01.792130 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:04.985577 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:04.985685 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:04.985703 | orchestrator | 2026-02-04 02:30:04.985735 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-04 02:30:04.985748 | orchestrator | Wednesday 04 February 2026 02:30:01 +0000 (0:00:00.170) 0:01:15.589 **** 2026-02-04 02:30:04.985759 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:04.985771 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:04.985808 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:04.985820 | orchestrator | 2026-02-04 02:30:04.985832 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-04 02:30:04.985844 | orchestrator | Wednesday 04 February 2026 02:30:01 +0000 (0:00:00.172) 0:01:15.761 **** 2026-02-04 02:30:04.985857 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:04.985868 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:04.985880 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:04.985892 | orchestrator | 2026-02-04 02:30:04.985903 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-04 02:30:04.985915 | orchestrator | Wednesday 04 February 2026 02:30:02 +0000 (0:00:00.188) 0:01:15.950 **** 2026-02-04 02:30:04.985927 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:04.985938 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:04.985950 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:04.985961 | orchestrator | 2026-02-04 02:30:04.985973 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-04 02:30:04.985985 | orchestrator | Wednesday 04 February 2026 02:30:02 +0000 (0:00:00.178) 0:01:16.128 **** 2026-02-04 02:30:04.985996 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:04.986008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:04.986079 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:04.986093 | orchestrator | 2026-02-04 02:30:04.986107 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-04 02:30:04.986121 | orchestrator | Wednesday 04 February 2026 02:30:02 +0000 (0:00:00.166) 0:01:16.295 **** 2026-02-04 02:30:04.986134 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:04.986149 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:04.986162 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:04.986175 | orchestrator | 2026-02-04 02:30:04.986189 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-04 02:30:04.986201 | orchestrator | Wednesday 04 February 2026 02:30:02 +0000 (0:00:00.149) 0:01:16.445 **** 2026-02-04 02:30:04.986214 | 
orchestrator | ok: [testbed-node-5] 2026-02-04 02:30:04.986228 | orchestrator | 2026-02-04 02:30:04.986242 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-04 02:30:04.986256 | orchestrator | Wednesday 04 February 2026 02:30:03 +0000 (0:00:00.539) 0:01:16.984 **** 2026-02-04 02:30:04.986270 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:30:04.986283 | orchestrator | 2026-02-04 02:30:04.986297 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-04 02:30:04.986312 | orchestrator | Wednesday 04 February 2026 02:30:03 +0000 (0:00:00.761) 0:01:17.746 **** 2026-02-04 02:30:04.986325 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:30:04.986339 | orchestrator | 2026-02-04 02:30:04.986353 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-04 02:30:04.986366 | orchestrator | Wednesday 04 February 2026 02:30:04 +0000 (0:00:00.159) 0:01:17.905 **** 2026-02-04 02:30:04.986389 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'vg_name': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'}) 2026-02-04 02:30:04.986404 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'vg_name': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}) 2026-02-04 02:30:04.986418 | orchestrator | 2026-02-04 02:30:04.986449 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-04 02:30:04.986461 | orchestrator | Wednesday 04 February 2026 02:30:04 +0000 (0:00:00.184) 0:01:18.090 **** 2026-02-04 02:30:04.986492 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:04.986511 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:04.986524 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:04.986535 | orchestrator | 2026-02-04 02:30:04.986575 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-04 02:30:04.986587 | orchestrator | Wednesday 04 February 2026 02:30:04 +0000 (0:00:00.173) 0:01:18.264 **** 2026-02-04 02:30:04.986599 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:04.986611 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:04.986623 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:04.986634 | orchestrator | 2026-02-04 02:30:04.986646 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-04 02:30:04.986658 | orchestrator | Wednesday 04 February 2026 02:30:04 +0000 (0:00:00.168) 0:01:18.433 **** 2026-02-04 02:30:04.986669 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 02:30:04.986681 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 02:30:04.986693 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:04.986705 | orchestrator | 2026-02-04 02:30:04.986717 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-04 02:30:04.986727 | orchestrator | Wednesday 04 February 2026 02:30:04 +0000 (0:00:00.162) 0:01:18.596 **** 2026-02-04 02:30:04.986738 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-04 02:30:04.986764 | orchestrator |  "lvm_report": { 2026-02-04 02:30:04.986777 | orchestrator |  "lv": [ 2026-02-04 02:30:04.986790 | orchestrator |  { 2026-02-04 02:30:04.986802 | orchestrator |  "lv_name": "osd-block-43734a2f-bb9f-5443-b704-3f4971f68639", 2026-02-04 02:30:04.986815 | orchestrator |  "vg_name": "ceph-43734a2f-bb9f-5443-b704-3f4971f68639" 2026-02-04 02:30:04.986826 | orchestrator |  }, 2026-02-04 02:30:04.986838 | orchestrator |  { 2026-02-04 02:30:04.986849 | orchestrator |  "lv_name": "osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af", 2026-02-04 02:30:04.986861 | orchestrator |  "vg_name": "ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af" 2026-02-04 02:30:04.986873 | orchestrator |  } 2026-02-04 02:30:04.986884 | orchestrator |  ], 2026-02-04 02:30:04.986896 | orchestrator |  "pv": [ 2026-02-04 02:30:04.986908 | orchestrator |  { 2026-02-04 02:30:04.986920 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-04 02:30:04.986932 | orchestrator |  "vg_name": "ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af" 2026-02-04 02:30:04.986944 | orchestrator |  }, 2026-02-04 02:30:04.986955 | orchestrator |  { 2026-02-04 02:30:04.986967 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-04 02:30:04.986991 | orchestrator |  "vg_name": "ceph-43734a2f-bb9f-5443-b704-3f4971f68639" 2026-02-04 02:30:04.987002 | orchestrator |  } 2026-02-04 02:30:04.987014 | orchestrator |  ] 2026-02-04 02:30:04.987026 | orchestrator |  } 2026-02-04 02:30:04.987038 | orchestrator | } 2026-02-04 02:30:04.987050 | orchestrator | 2026-02-04 02:30:04.987062 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:30:04.987074 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-04 02:30:04.987086 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-04 02:30:04.987098 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-04 02:30:04.987109 | orchestrator | 2026-02-04 02:30:04.987121 | orchestrator | 2026-02-04 02:30:04.987133 | orchestrator | 2026-02-04 02:30:04.987144 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:30:04.987156 | orchestrator | Wednesday 04 February 2026 02:30:04 +0000 (0:00:00.168) 0:01:18.765 **** 2026-02-04 02:30:04.987167 | orchestrator | =============================================================================== 2026-02-04 02:30:04.987179 | orchestrator | Create block VGs -------------------------------------------------------- 6.78s 2026-02-04 02:30:04.987190 | orchestrator | Create block LVs -------------------------------------------------------- 4.12s 2026-02-04 02:30:04.987202 | orchestrator | Add known partitions to the list of available block devices ------------- 2.05s 2026-02-04 02:30:04.987214 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.79s 2026-02-04 02:30:04.987225 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.75s 2026-02-04 02:30:04.987237 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.60s 2026-02-04 02:30:04.987248 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.56s 2026-02-04 02:30:04.987260 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.55s 2026-02-04 02:30:04.987278 | orchestrator | Add known links to the list of available block devices ------------------ 1.45s 2026-02-04 02:30:05.402516 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s 2026-02-04 02:30:05.402599 | orchestrator | Add known partitions to the list of available block devices ------------- 0.93s 2026-02-04 02:30:05.402608 | 
orchestrator | Fail if number of OSDs exceeds num_osds for a DB+WAL VG ----------------- 0.89s 2026-02-04 02:30:05.402632 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s 2026-02-04 02:30:05.402638 | orchestrator | Calculate size needed for LVs on ceph_db_devices ------------------------ 0.88s 2026-02-04 02:30:05.402644 | orchestrator | Print LVM report data --------------------------------------------------- 0.80s 2026-02-04 02:30:05.402650 | orchestrator | Get initial list of available block devices ----------------------------- 0.79s 2026-02-04 02:30:05.402656 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s 2026-02-04 02:30:05.402662 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.74s 2026-02-04 02:30:05.402668 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.74s 2026-02-04 02:30:05.402674 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-02-04 02:30:17.851724 | orchestrator | 2026-02-04 02:30:17 | INFO  | Task acb3f3b9-a23d-4f65-ae32-43a74ab692de (facts) was prepared for execution. 2026-02-04 02:30:17.851872 | orchestrator | 2026-02-04 02:30:17 | INFO  | It takes a moment until task acb3f3b9-a23d-4f65-ae32-43a74ab692de (facts) has been started and output is visible here. 
2026-02-04 02:30:31.333554 | orchestrator | 2026-02-04 02:30:31.333669 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-04 02:30:31.333710 | orchestrator | 2026-02-04 02:30:31.333722 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-04 02:30:31.333733 | orchestrator | Wednesday 04 February 2026 02:30:22 +0000 (0:00:00.284) 0:00:00.284 **** 2026-02-04 02:30:31.333743 | orchestrator | ok: [testbed-manager] 2026-02-04 02:30:31.333755 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:30:31.333765 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:30:31.333775 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:30:31.333785 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:30:31.333795 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:30:31.333806 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:30:31.333816 | orchestrator | 2026-02-04 02:30:31.333826 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-04 02:30:31.333837 | orchestrator | Wednesday 04 February 2026 02:30:23 +0000 (0:00:01.175) 0:00:01.459 **** 2026-02-04 02:30:31.333847 | orchestrator | skipping: [testbed-manager] 2026-02-04 02:30:31.333859 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:30:31.333869 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:30:31.333879 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:30:31.333889 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:30:31.333899 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:30:31.333910 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:31.333920 | orchestrator | 2026-02-04 02:30:31.333931 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-04 02:30:31.333941 | orchestrator | 2026-02-04 02:30:31.333951 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-04 02:30:31.333962 | orchestrator | Wednesday 04 February 2026 02:30:24 +0000 (0:00:01.490) 0:00:02.950 **** 2026-02-04 02:30:31.333972 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:30:31.333982 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:30:31.333992 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:30:31.334003 | orchestrator | ok: [testbed-manager] 2026-02-04 02:30:31.334013 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:30:31.334082 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:30:31.334095 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:30:31.334106 | orchestrator | 2026-02-04 02:30:31.334118 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-04 02:30:31.334129 | orchestrator | 2026-02-04 02:30:31.334140 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-04 02:30:31.334151 | orchestrator | Wednesday 04 February 2026 02:30:30 +0000 (0:00:05.347) 0:00:08.298 **** 2026-02-04 02:30:31.334163 | orchestrator | skipping: [testbed-manager] 2026-02-04 02:30:31.334175 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:30:31.334186 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:30:31.334197 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:30:31.334209 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:30:31.334220 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:30:31.334231 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:30:31.334242 | orchestrator | 2026-02-04 02:30:31.334253 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:30:31.334265 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:30:31.334278 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-04 02:30:31.334290 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:30:31.334302 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:30:31.334314 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:30:31.334332 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:30:31.334344 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 02:30:31.334356 | orchestrator | 2026-02-04 02:30:31.334367 | orchestrator | 2026-02-04 02:30:31.334379 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:30:31.334405 | orchestrator | Wednesday 04 February 2026 02:30:30 +0000 (0:00:00.562) 0:00:08.861 **** 2026-02-04 02:30:31.334417 | orchestrator | =============================================================================== 2026-02-04 02:30:31.334426 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.35s 2026-02-04 02:30:31.334436 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.49s 2026-02-04 02:30:31.334472 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.18s 2026-02-04 02:30:31.334482 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2026-02-04 02:30:33.846163 | orchestrator | 2026-02-04 02:30:33 | INFO  | Task 40b92eaf-6436-4e20-a5ba-8295dd3d502a (ceph) was prepared for execution. 2026-02-04 02:30:33.846288 | orchestrator | 2026-02-04 02:30:33 | INFO  | It takes a moment until task 40b92eaf-6436-4e20-a5ba-8295dd3d502a (ceph) has been started and output is visible here. 
2026-02-04 02:30:51.975938 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-04 02:30:51.976075 | orchestrator | 2.16.14 2026-02-04 02:30:51.976096 | orchestrator | 2026-02-04 02:30:51.976110 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-04 02:30:51.976122 | orchestrator | 2026-02-04 02:30:51.976133 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-04 02:30:51.976144 | orchestrator | Wednesday 04 February 2026 02:30:38 +0000 (0:00:00.799) 0:00:00.799 **** 2026-02-04 02:30:51.976156 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:30:51.976168 | orchestrator | 2026-02-04 02:30:51.976178 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-04 02:30:51.976189 | orchestrator | Wednesday 04 February 2026 02:30:40 +0000 (0:00:01.145) 0:00:01.945 **** 2026-02-04 02:30:51.976201 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:30:51.976212 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:30:51.976222 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:30:51.976233 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:30:51.976244 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:30:51.976254 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:30:51.976266 | orchestrator | 2026-02-04 02:30:51.976277 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-04 02:30:51.976288 | orchestrator | Wednesday 04 February 2026 02:30:41 +0000 (0:00:01.258) 0:00:03.203 **** 2026-02-04 02:30:51.976299 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:30:51.976309 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:30:51.976320 | orchestrator | ok: [testbed-node-5] 2026-02-04 
02:30:51.976331 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:30:51.976342 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:30:51.976352 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:30:51.976363 | orchestrator |
2026-02-04 02:30:51.976374 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-04 02:30:51.976385 | orchestrator | Wednesday 04 February 2026 02:30:42 +0000 (0:00:00.789) 0:00:03.993 ****
2026-02-04 02:30:51.976395 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:30:51.976406 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:30:51.976417 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:30:51.976427 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:30:51.976527 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:30:51.976548 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:30:51.976560 | orchestrator |
2026-02-04 02:30:51.976573 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-04 02:30:51.976586 | orchestrator | Wednesday 04 February 2026 02:30:43 +0000 (0:00:00.929) 0:00:04.922 ****
2026-02-04 02:30:51.976599 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:30:51.976611 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:30:51.976623 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:30:51.976635 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:30:51.976648 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:30:51.976661 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:30:51.976673 | orchestrator |
2026-02-04 02:30:51.976686 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-04 02:30:51.976698 | orchestrator | Wednesday 04 February 2026 02:30:43 +0000 (0:00:00.794) 0:00:05.717 ****
2026-02-04 02:30:51.976711 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:30:51.976723 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:30:51.976735 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:30:51.976748 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:30:51.976761 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:30:51.976773 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:30:51.976785 | orchestrator |
2026-02-04 02:30:51.976798 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-04 02:30:51.976811 | orchestrator | Wednesday 04 February 2026 02:30:44 +0000 (0:00:00.664) 0:00:06.382 ****
2026-02-04 02:30:51.976824 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:30:51.976836 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:30:51.976847 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:30:51.976857 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:30:51.976868 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:30:51.976878 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:30:51.976889 | orchestrator |
2026-02-04 02:30:51.976900 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-04 02:30:51.976911 | orchestrator | Wednesday 04 February 2026 02:30:45 +0000 (0:00:00.826) 0:00:07.208 ****
2026-02-04 02:30:51.976921 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:30:51.976933 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:30:51.976944 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:30:51.976955 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:30:51.976965 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:30:51.976976 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:30:51.976990 | orchestrator |
2026-02-04 02:30:51.977009 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-04 02:30:51.977027 | orchestrator | Wednesday 04 February 2026 02:30:45 +0000 (0:00:00.595) 0:00:07.804 ****
2026-02-04 02:30:51.977047 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:30:51.977066 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:30:51.977083 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:30:51.977100 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:30:51.977111 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:30:51.977138 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:30:51.977149 | orchestrator |
2026-02-04 02:30:51.977161 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-04 02:30:51.977172 | orchestrator | Wednesday 04 February 2026 02:30:46 +0000 (0:00:00.842) 0:00:08.647 ****
2026-02-04 02:30:51.977183 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-04 02:30:51.977193 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 02:30:51.977204 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 02:30:51.977215 | orchestrator |
2026-02-04 02:30:51.977225 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-04 02:30:51.977236 | orchestrator | Wednesday 04 February 2026 02:30:47 +0000 (0:00:00.673) 0:00:09.321 ****
2026-02-04 02:30:51.977256 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:30:51.977267 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:30:51.977277 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:30:51.977306 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:30:51.977318 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:30:51.977329 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:30:51.977339 | orchestrator |
2026-02-04 02:30:51.977350 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-04 02:30:51.977361 | orchestrator | Wednesday 04 February 2026 02:30:48 +0000 (0:00:02.352) 0:00:10.041 ****
2026-02-04 02:30:51.977372 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-04 02:30:51.977387 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 02:30:51.977405 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 02:30:51.977421 | orchestrator |
2026-02-04 02:30:51.977440 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-04 02:30:51.977481 | orchestrator | Wednesday 04 February 2026 02:30:50 +0000 (0:00:02.352) 0:00:12.393 ****
2026-02-04 02:30:51.977503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-04 02:30:51.977516 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-04 02:30:51.977526 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-04 02:30:51.977537 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:30:51.977548 | orchestrator |
2026-02-04 02:30:51.977559 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-04 02:30:51.977570 | orchestrator | Wednesday 04 February 2026 02:30:50 +0000 (0:00:00.414) 0:00:12.808 ****
2026-02-04 02:30:51.977583 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-04 02:30:51.977597 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-04 02:30:51.977609 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-04 02:30:51.977620 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:30:51.977631 | orchestrator |
2026-02-04 02:30:51.977642 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-04 02:30:51.977652 | orchestrator | Wednesday 04 February 2026 02:30:51 +0000 (0:00:00.643) 0:00:13.452 ****
2026-02-04 02:30:51.977665 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-04 02:30:51.977679 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-04 02:30:51.977691 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-04 02:30:51.977710 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:30:51.977721 | orchestrator |
2026-02-04 02:30:51.977739 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-04 02:30:51.977750 | orchestrator | Wednesday 04 February 2026 02:30:51 +0000 (0:00:00.182) 0:00:13.635 ****
2026-02-04 02:30:51.977823 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-04 02:30:49.068500', 'end': '2026-02-04 02:30:49.126688', 'delta': '0:00:00.058188', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-04 02:31:01.981667 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-04 02:30:49.628696', 'end': '2026-02-04 02:30:49.676140', 'delta': '0:00:00.047444', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-04 02:31:01.981766 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-04 02:30:50.144293', 'end': '2026-02-04 02:30:50.190976', 'delta': '0:00:00.046683', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-04 02:31:01.981781 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:01.981793 | orchestrator |
2026-02-04 02:31:01.981804 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-04 02:31:01.981816 | orchestrator | Wednesday 04 February 2026 02:30:51 +0000 (0:00:00.191) 0:00:13.827 ****
2026-02-04 02:31:01.981825 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:31:01.981836 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:31:01.981845 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:31:01.981854 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:31:01.981864 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:31:01.981873 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:31:01.981883 | orchestrator |
2026-02-04 02:31:01.981893 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-04 02:31:01.981902 | orchestrator | Wednesday 04 February 2026 02:30:52 +0000 (0:00:00.873) 0:00:14.618 ****
2026-02-04 02:31:01.981911 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 02:31:01.981920 | orchestrator |
2026-02-04 02:31:01.981930 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-04 02:31:01.981939 | orchestrator | Wednesday 04 February 2026 02:30:53 +0000 (0:00:00.873) 0:00:15.492 ****
2026-02-04 02:31:01.981972 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:01.981981 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:31:01.981991 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:31:01.981999 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:31:01.982008 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:31:01.982066 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:31:01.982075 | orchestrator |
2026-02-04 02:31:01.982081 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-04 02:31:01.982087 | orchestrator | Wednesday 04 February 2026 02:30:54 +0000 (0:00:00.828) 0:00:16.321 ****
2026-02-04 02:31:01.982093 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:01.982098 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:31:01.982104 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:31:01.982110 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:31:01.982117 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:31:01.982123 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:31:01.982130 | orchestrator |
2026-02-04 02:31:01.982136 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-04 02:31:01.982143 | orchestrator | Wednesday 04 February 2026 02:30:55 +0000 (0:00:01.230) 0:00:17.551 ****
2026-02-04 02:31:01.982150 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:01.982197 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:31:01.982204 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:31:01.982211 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:31:01.982217 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:31:01.982234 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:31:01.982241 | orchestrator |
2026-02-04 02:31:01.982248 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-04 02:31:01.982254 | orchestrator | Wednesday 04 February 2026 02:30:56 +0000 (0:00:00.629) 0:00:18.181 ****
2026-02-04 02:31:01.982260 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:01.982266 | orchestrator |
2026-02-04 02:31:01.982275 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-04 02:31:01.982285 | orchestrator | Wednesday 04 February 2026 02:30:56 +0000 (0:00:00.172) 0:00:18.354 ****
2026-02-04 02:31:01.982294 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:01.982303 | orchestrator |
2026-02-04 02:31:01.982312 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-04 02:31:01.982322 | orchestrator | Wednesday 04 February 2026 02:30:56 +0000 (0:00:00.219) 0:00:18.574 ****
2026-02-04 02:31:01.982331 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:01.982339 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:31:01.982347 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:31:01.982355 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:31:01.982363 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:31:01.982372 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:31:01.982381 | orchestrator |
2026-02-04 02:31:01.982408 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-04 02:31:01.982419 | orchestrator | Wednesday 04 February 2026 02:30:57 +0000 (0:00:00.778) 0:00:19.352 ****
2026-02-04 02:31:01.982427 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:01.982437 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:31:01.982464 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:31:01.982475 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:31:01.982485 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:31:01.982492 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:31:01.982499 | orchestrator |
2026-02-04 02:31:01.982505 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-04 02:31:01.982512 | orchestrator | Wednesday 04 February 2026 02:30:58 +0000 (0:00:00.604) 0:00:19.957 ****
2026-02-04 02:31:01.982518 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:01.982525 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:31:01.982531 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:31:01.982544 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:31:01.982550 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:31:01.982555 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:31:01.982561 | orchestrator |
2026-02-04 02:31:01.982566 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-04 02:31:01.982572 | orchestrator | Wednesday 04 February 2026 02:30:58 +0000 (0:00:00.802) 0:00:20.760 ****
2026-02-04 02:31:01.982577 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:01.982582 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:31:01.982588 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:31:01.982593 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:31:01.982602 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:31:01.982610 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:31:01.982619 | orchestrator |
2026-02-04 02:31:01.982628 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-04 02:31:01.982637 | orchestrator | Wednesday 04 February 2026 02:30:59 +0000 (0:00:00.663) 0:00:21.423 ****
2026-02-04 02:31:01.982646 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:01.982655 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:31:01.982664 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:31:01.982673 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:31:01.982683 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:31:01.982691 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:31:01.982701 | orchestrator |
2026-02-04 02:31:01.982707 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-04 02:31:01.982713 | orchestrator | Wednesday 04 February 2026 02:31:00 +0000 (0:00:00.824) 0:00:22.248 ****
2026-02-04 02:31:01.982718 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:01.982724 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:31:01.982729 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:31:01.982734 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:31:01.982740 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:31:01.982745 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:31:01.982750 | orchestrator |
2026-02-04 02:31:01.982756 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-04 02:31:01.982762 | orchestrator | Wednesday 04 February 2026 02:31:01 +0000 (0:00:00.634) 0:00:22.882 ****
2026-02-04 02:31:01.982767 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:01.982773 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:31:01.982778 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:31:01.982783 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:31:01.982789 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:31:01.982798 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:31:01.982807 | orchestrator |
2026-02-04 02:31:01.982816 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-04 02:31:01.982825 | orchestrator | Wednesday 04 February 2026 02:31:01 +0000 (0:00:00.842) 0:00:23.724 ****
2026-02-04 02:31:01.982837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f', 'dm-uuid-LVM-8XaWcwBldrFACyhn8O8pDrkh8WYfwfMh8YdRgn42SXPKkSSmdqnloX2coya2uTEh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:01.982854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e', 'dm-uuid-LVM-BggcAryejjvGBF4uvp6BcYG8cW5k2lInqXUvcrL0euXIKDnaXO5lD17ef9ulmfzT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:01.982877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.120645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.120776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.120802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.120822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.120840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.120860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.120880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.120984 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-04 02:31:02.121043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c', 'dm-uuid-LVM-jabOFLmF8RS1U4YRftNuTtdThdIFxea35ctI13zu0z0FRbKQORFQtA0W3pu2nuf0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.121059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LUqg5q-XQXl-4J84-Fu4r-xNUp-Z07d-jQvh8Z', 'scsi-0QEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388', 'scsi-SQEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-04 02:31:02.121071 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843', 'dm-uuid-LVM-GuQppvMqMgPM92HHdmch1RUlEtgMK7bAQGkZWEBmxgWBBqnmby4j6kn1XrU8W6rj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.121088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PkP1x1-WFQe-TRGf-2R1c-oEQv-Qw43-IKwaXF', 'scsi-0QEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40', 'scsi-SQEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-04 02:31:02.121131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.228978 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811', 'scsi-SQEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-04 02:31:02.229107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-19-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-04 02:31:02.229136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.229157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.229178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.229196 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.229269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.229293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.229356 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-04 02:31:02.229400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16'], 'labels': ['BOOT'], 'masters':
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.229426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lVamx9-eYv9-88F9-1eWN-Mo2X-ZvoC-DQM8Qk', 'scsi-0QEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536', 'scsi-SQEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.229538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af', 'dm-uuid-LVM-jfhjIQs9I12AbVZ4uHpbas8Q8DuoJ56eVvgnpRveGHUC1VWvw0UeAndBY1g45KfH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.229577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': 
['ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1Bwhrb-Xrjl-JUvU-1GoK-f7aN-SV93-uYzfRx', 'scsi-0QEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd', 'scsi-SQEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.383047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639', 'dm-uuid-LVM-vz2cv2RninoOpnjrAP98IcdUAgz3XBEESK6kemILvNkP1xNIipyazKS9tR60DcmG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.383164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23', 'scsi-SQEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.383193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.383217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-20-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.383269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-04 02:31:02.383310 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.383332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.383350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.383371 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:02.383416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.383439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.383489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.383523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.383559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Zb3vde-Jb13-PnWs-XBLv-pqCq-xraX-sEUQHY', 'scsi-0QEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675', 'scsi-SQEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.383598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2LO7pB-3JRT-gNDG-CXHX-CXgP-r5lI-kGILdq', 'scsi-0QEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52', 'scsi-SQEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.611132 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b', 'scsi-SQEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.611217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-20-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.611250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.611261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-04 02:31:02.611281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.611289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.611296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.611303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.611324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.611332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.611345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.611361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-20-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.611369 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:02.611378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.611385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.611397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.843526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.843663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.843680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.843692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.843720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.843758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part1', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part14', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part15', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part16', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.843776 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-20-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:02.843807 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:02.843820 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:02.843831 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:02.843843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.843855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.843871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.843884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.843895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.843907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:02.843918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-02-04 02:31:02.843938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:31:03.049100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part1', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part14', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part15', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part16', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:03.049213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-20-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:31:03.049232 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:03.049246 | orchestrator | 2026-02-04 02:31:03.049259 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-04 02:31:03.049272 | orchestrator | Wednesday 04 February 2026 02:31:02 +0000 (0:00:00.963) 0:00:24.688 **** 2026-02-04 02:31:03.049285 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f', 
'dm-uuid-LVM-8XaWcwBldrFACyhn8O8pDrkh8WYfwfMh8YdRgn42SXPKkSSmdqnloX2coya2uTEh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.049338 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e', 'dm-uuid-LVM-BggcAryejjvGBF4uvp6BcYG8cW5k2lInqXUvcrL0euXIKDnaXO5lD17ef9ulmfzT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.049352 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.049366 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.049384 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.049396 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.049408 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c', 'dm-uuid-LVM-jabOFLmF8RS1U4YRftNuTtdThdIFxea35ctI13zu0z0FRbKQORFQtA0W3pu2nuf0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.049433 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.108244 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843', 'dm-uuid-LVM-GuQppvMqMgPM92HHdmch1RUlEtgMK7bAQGkZWEBmxgWBBqnmby4j6kn1XrU8W6rj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 
'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.108340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.108371 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.108383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.108394 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.108424 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.108493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 
02:31:03.108515 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.108527 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.108552 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LUqg5q-XQXl-4J84-Fu4r-xNUp-Z07d-jQvh8Z', 'scsi-0QEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388', 'scsi-SQEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.431135 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.431248 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PkP1x1-WFQe-TRGf-2R1c-oEQv-Qw43-IKwaXF', 'scsi-0QEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40', 'scsi-SQEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.431264 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.431277 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811', 'scsi-SQEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.431307 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.431336 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-19-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.431348 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.431368 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.431396 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lVamx9-eYv9-88F9-1eWN-Mo2X-ZvoC-DQM8Qk', 'scsi-0QEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536', 'scsi-SQEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.567686 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1Bwhrb-Xrjl-JUvU-1GoK-f7aN-SV93-uYzfRx', 'scsi-0QEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd', 'scsi-SQEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.567797 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23', 'scsi-SQEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.567812 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-20-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:03.567841 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af', 'dm-uuid-LVM-jfhjIQs9I12AbVZ4uHpbas8Q8DuoJ56eVvgnpRveGHUC1VWvw0UeAndBY1g45KfH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-04 02:31:03.567866 | orchestrator | skipping: [testbed-node-5] => (items: dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0; skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool')
2026-02-04 02:31:03.707051 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:31:03.707076 | orchestrator | skipping: [testbed-node-0] => (items: loop0-loop7, sda, sr0; skip_reason: 'Conditional result was False', false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-02-04 02:31:03.933131 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:31:03.933153 | orchestrator | skipping: [testbed-node-1] => (items: loop0-loop7, sda, sr0; skip_reason: 'Conditional result was False', false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-02-04 02:31:04.155559 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:31:04.155584 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:31:04.155605 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:31:04.155627 | orchestrator | skipping: [testbed-node-2] => (items: loop0-loop7, sda; skip_reason: 'Conditional result was False', false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2026-02-04 02:31:10.397748 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-20-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:31:10.397759 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:10.397769 | orchestrator | 2026-02-04 02:31:10.397779 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-04 02:31:10.397788 | orchestrator | Wednesday 04 February 2026 02:31:04 +0000 (0:00:01.318) 0:00:26.007 **** 2026-02-04 02:31:10.397796 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:31:10.397804 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:10.397812 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:10.397820 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:31:10.397827 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:31:10.397835 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:31:10.397843 | orchestrator | 2026-02-04 02:31:10.397851 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-04 02:31:10.397859 | orchestrator | Wednesday 04 February 2026 02:31:05 +0000 (0:00:00.930) 0:00:26.937 **** 2026-02-04 02:31:10.397867 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:31:10.397875 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:10.397882 | 
orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:10.397890 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:31:10.397898 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:31:10.397905 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:31:10.397913 | orchestrator | 2026-02-04 02:31:10.397921 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-04 02:31:10.397929 | orchestrator | Wednesday 04 February 2026 02:31:05 +0000 (0:00:00.827) 0:00:27.765 **** 2026-02-04 02:31:10.397937 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:10.397945 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:10.397953 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:10.397960 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:10.397968 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:10.397976 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:10.397984 | orchestrator | 2026-02-04 02:31:10.397992 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-04 02:31:10.398000 | orchestrator | Wednesday 04 February 2026 02:31:06 +0000 (0:00:00.589) 0:00:28.355 **** 2026-02-04 02:31:10.398008 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:10.398071 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:10.398082 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:10.398091 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:10.398100 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:10.398110 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:10.398119 | orchestrator | 2026-02-04 02:31:10.398129 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-04 02:31:10.398138 | orchestrator | Wednesday 04 February 2026 02:31:07 +0000 (0:00:00.810) 0:00:29.165 **** 2026-02-04 02:31:10.398148 | orchestrator | skipping: 
[testbed-node-3] 2026-02-04 02:31:10.398157 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:10.398166 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:10.398183 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:10.398192 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:10.398202 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:10.398211 | orchestrator | 2026-02-04 02:31:10.398220 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-04 02:31:10.398228 | orchestrator | Wednesday 04 February 2026 02:31:07 +0000 (0:00:00.645) 0:00:29.811 **** 2026-02-04 02:31:10.398236 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:10.398244 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:10.398252 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:10.398260 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:10.398267 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:10.398275 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:10.398283 | orchestrator | 2026-02-04 02:31:10.398291 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-04 02:31:10.398299 | orchestrator | Wednesday 04 February 2026 02:31:08 +0000 (0:00:00.843) 0:00:30.654 **** 2026-02-04 02:31:10.398307 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-04 02:31:10.398316 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-04 02:31:10.398324 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-04 02:31:10.398331 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-04 02:31:10.398339 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-04 02:31:10.398347 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-04 02:31:10.398355 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 
2026-02-04 02:31:10.398363 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 02:31:10.398371 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-04 02:31:10.398378 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-04 02:31:10.398386 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-04 02:31:10.398394 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-04 02:31:10.398402 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-04 02:31:10.398410 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-04 02:31:10.398424 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-04 02:31:24.773708 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-04 02:31:24.773823 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-04 02:31:24.773868 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-04 02:31:24.773890 | orchestrator | 2026-02-04 02:31:24.773930 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-04 02:31:24.773952 | orchestrator | Wednesday 04 February 2026 02:31:10 +0000 (0:00:01.594) 0:00:32.248 **** 2026-02-04 02:31:24.773973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-04 02:31:24.773993 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-04 02:31:24.774011 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-04 02:31:24.774089 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:24.774101 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-04 02:31:24.774112 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-04 02:31:24.774123 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-04 02:31:24.774168 | orchestrator | skipping: [testbed-node-4] 
2026-02-04 02:31:24.774181 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-04 02:31:24.774192 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-04 02:31:24.774203 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-04 02:31:24.774215 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:24.774228 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 02:31:24.774246 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 02:31:24.774298 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 02:31:24.774320 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:24.774339 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-04 02:31:24.774359 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-04 02:31:24.774373 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-04 02:31:24.774386 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:24.774398 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-04 02:31:24.774412 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-04 02:31:24.774426 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-04 02:31:24.774438 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:24.774451 | orchestrator | 2026-02-04 02:31:24.774497 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-04 02:31:24.774510 | orchestrator | Wednesday 04 February 2026 02:31:11 +0000 (0:00:00.934) 0:00:33.183 **** 2026-02-04 02:31:24.774523 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:24.774536 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:24.774548 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:24.774560 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:31:24.774572 | orchestrator | 2026-02-04 02:31:24.774583 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-04 02:31:24.774596 | orchestrator | Wednesday 04 February 2026 02:31:12 +0000 (0:00:01.023) 0:00:34.206 **** 2026-02-04 02:31:24.774607 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:24.774623 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:24.774642 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:24.774660 | orchestrator | 2026-02-04 02:31:24.774678 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-04 02:31:24.774697 | orchestrator | Wednesday 04 February 2026 02:31:12 +0000 (0:00:00.367) 0:00:34.573 **** 2026-02-04 02:31:24.774716 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:24.774735 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:24.774753 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:24.774772 | orchestrator | 2026-02-04 02:31:24.774784 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-04 02:31:24.774795 | orchestrator | Wednesday 04 February 2026 02:31:13 +0000 (0:00:00.331) 0:00:34.905 **** 2026-02-04 02:31:24.774806 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:24.774817 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:24.774828 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:24.774838 | orchestrator | 2026-02-04 02:31:24.774849 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-04 02:31:24.774860 | orchestrator | Wednesday 04 February 2026 02:31:13 +0000 (0:00:00.318) 0:00:35.224 **** 2026-02-04 02:31:24.774871 | orchestrator | 
ok: [testbed-node-3] 2026-02-04 02:31:24.774897 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:24.774908 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:24.774918 | orchestrator | 2026-02-04 02:31:24.774929 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-04 02:31:24.774940 | orchestrator | Wednesday 04 February 2026 02:31:14 +0000 (0:00:00.712) 0:00:35.936 **** 2026-02-04 02:31:24.774951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 02:31:24.774962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 02:31:24.774973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 02:31:24.774987 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:24.775007 | orchestrator | 2026-02-04 02:31:24.775026 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-04 02:31:24.775060 | orchestrator | Wednesday 04 February 2026 02:31:14 +0000 (0:00:00.397) 0:00:36.333 **** 2026-02-04 02:31:24.775081 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 02:31:24.775101 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 02:31:24.775121 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 02:31:24.775141 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:24.775158 | orchestrator | 2026-02-04 02:31:24.775217 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-04 02:31:24.775242 | orchestrator | Wednesday 04 February 2026 02:31:14 +0000 (0:00:00.418) 0:00:36.752 **** 2026-02-04 02:31:24.775273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 02:31:24.775288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 02:31:24.775299 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-04 02:31:24.775310 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:24.775321 | orchestrator | 2026-02-04 02:31:24.775332 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-04 02:31:24.775343 | orchestrator | Wednesday 04 February 2026 02:31:15 +0000 (0:00:00.411) 0:00:37.164 **** 2026-02-04 02:31:24.775358 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:31:24.775377 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:24.775395 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:24.775413 | orchestrator | 2026-02-04 02:31:24.775432 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-04 02:31:24.775450 | orchestrator | Wednesday 04 February 2026 02:31:15 +0000 (0:00:00.347) 0:00:37.512 **** 2026-02-04 02:31:24.775514 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-04 02:31:24.775532 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-04 02:31:24.775543 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-04 02:31:24.775554 | orchestrator | 2026-02-04 02:31:24.775565 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-04 02:31:24.775576 | orchestrator | Wednesday 04 February 2026 02:31:16 +0000 (0:00:00.994) 0:00:38.507 **** 2026-02-04 02:31:24.775587 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 02:31:24.775599 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 02:31:24.775610 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 02:31:24.775621 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-04 02:31:24.775632 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-04 02:31:24.775643 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-04 02:31:24.775654 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-04 02:31:24.775665 | orchestrator | 2026-02-04 02:31:24.775675 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-04 02:31:24.775686 | orchestrator | Wednesday 04 February 2026 02:31:17 +0000 (0:00:00.823) 0:00:39.330 **** 2026-02-04 02:31:24.775697 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 02:31:24.775708 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 02:31:24.775719 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 02:31:24.775737 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-04 02:31:24.775756 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-04 02:31:24.775774 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-04 02:31:24.775793 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-04 02:31:24.775811 | orchestrator | 2026-02-04 02:31:24.775830 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-04 02:31:24.775862 | orchestrator | Wednesday 04 February 2026 02:31:19 +0000 (0:00:01.913) 0:00:41.244 **** 2026-02-04 02:31:24.775883 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:31:24.775897 | orchestrator | 2026-02-04 02:31:24.775908 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] 
********************* 2026-02-04 02:31:24.775919 | orchestrator | Wednesday 04 February 2026 02:31:20 +0000 (0:00:01.219) 0:00:42.463 **** 2026-02-04 02:31:24.775930 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:31:24.775940 | orchestrator | 2026-02-04 02:31:24.775951 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-04 02:31:24.775962 | orchestrator | Wednesday 04 February 2026 02:31:21 +0000 (0:00:01.277) 0:00:43.741 **** 2026-02-04 02:31:24.775973 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:24.775984 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:24.775994 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:24.776005 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:31:24.776016 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:31:24.776027 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:31:24.776038 | orchestrator | 2026-02-04 02:31:24.776049 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-04 02:31:24.776060 | orchestrator | Wednesday 04 February 2026 02:31:23 +0000 (0:00:01.306) 0:00:45.048 **** 2026-02-04 02:31:24.776070 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:24.776129 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:31:24.776150 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:24.776169 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:24.776186 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:24.776206 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:24.776224 | orchestrator | 2026-02-04 02:31:24.776243 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-04 02:31:24.776255 | orchestrator | Wednesday 04 February 2026 02:31:23 +0000 
(0:00:00.707) 0:00:45.755 **** 2026-02-04 02:31:24.776266 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:31:24.776277 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:24.776299 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:46.431566 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:46.431702 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:46.431730 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:46.431750 | orchestrator | 2026-02-04 02:31:46.431788 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-04 02:31:46.431808 | orchestrator | Wednesday 04 February 2026 02:31:24 +0000 (0:00:00.869) 0:00:46.624 **** 2026-02-04 02:31:46.431825 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:46.431844 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:46.431862 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:31:46.431880 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:46.431898 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:46.431917 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:46.431935 | orchestrator | 2026-02-04 02:31:46.431954 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-04 02:31:46.431971 | orchestrator | Wednesday 04 February 2026 02:31:25 +0000 (0:00:00.724) 0:00:47.349 **** 2026-02-04 02:31:46.431989 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:46.432008 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:46.432027 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:46.432044 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:31:46.432064 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:31:46.432082 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:31:46.432099 | orchestrator | 2026-02-04 02:31:46.432118 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2026-02-04 02:31:46.432170 | orchestrator | Wednesday 04 February 2026 02:31:26 +0000 (0:00:01.220) 0:00:48.570 **** 2026-02-04 02:31:46.432190 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:46.432209 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:46.432227 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:46.432245 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:46.432264 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:46.432283 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:46.432301 | orchestrator | 2026-02-04 02:31:46.432319 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-04 02:31:46.432337 | orchestrator | Wednesday 04 February 2026 02:31:27 +0000 (0:00:00.656) 0:00:49.226 **** 2026-02-04 02:31:46.432355 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:46.432374 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:46.432391 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:46.432407 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:46.432425 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:46.432443 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:46.432560 | orchestrator | 2026-02-04 02:31:46.432584 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-04 02:31:46.432600 | orchestrator | Wednesday 04 February 2026 02:31:28 +0000 (0:00:00.841) 0:00:50.067 **** 2026-02-04 02:31:46.432616 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:31:46.432633 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:46.432650 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:46.432668 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:31:46.432684 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:31:46.432700 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:31:46.432716 | orchestrator | 2026-02-04 
02:31:46.432733 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-04 02:31:46.432750 | orchestrator | Wednesday 04 February 2026 02:31:29 +0000 (0:00:01.029) 0:00:51.097 **** 2026-02-04 02:31:46.432767 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:31:46.432783 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:46.432800 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:46.432816 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:31:46.432832 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:31:46.432849 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:31:46.432866 | orchestrator | 2026-02-04 02:31:46.432883 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-04 02:31:46.432899 | orchestrator | Wednesday 04 February 2026 02:31:30 +0000 (0:00:01.345) 0:00:52.443 **** 2026-02-04 02:31:46.432915 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:46.432932 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:46.432949 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:46.432966 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:46.432984 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:46.433001 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:46.433034 | orchestrator | 2026-02-04 02:31:46.433051 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-04 02:31:46.433067 | orchestrator | Wednesday 04 February 2026 02:31:31 +0000 (0:00:00.618) 0:00:53.061 **** 2026-02-04 02:31:46.433084 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:46.433102 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:46.433119 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:46.433135 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:31:46.433151 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:31:46.433167 | 
orchestrator | ok: [testbed-node-2] 2026-02-04 02:31:46.433184 | orchestrator | 2026-02-04 02:31:46.433201 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-04 02:31:46.433219 | orchestrator | Wednesday 04 February 2026 02:31:32 +0000 (0:00:00.858) 0:00:53.920 **** 2026-02-04 02:31:46.433235 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:31:46.433250 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:46.433280 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:46.433297 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:46.433315 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:46.433332 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:46.433348 | orchestrator | 2026-02-04 02:31:46.433364 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-04 02:31:46.433381 | orchestrator | Wednesday 04 February 2026 02:31:32 +0000 (0:00:00.624) 0:00:54.544 **** 2026-02-04 02:31:46.433397 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:31:46.433412 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:46.433428 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:46.433444 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:46.433481 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:46.433499 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:46.433515 | orchestrator | 2026-02-04 02:31:46.433531 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-04 02:31:46.433548 | orchestrator | Wednesday 04 February 2026 02:31:33 +0000 (0:00:00.811) 0:00:55.356 **** 2026-02-04 02:31:46.433565 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:31:46.433581 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:46.433624 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:46.433641 | orchestrator | skipping: [testbed-node-0] 2026-02-04 
02:31:46.433658 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:46.433683 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:46.433700 | orchestrator | 2026-02-04 02:31:46.433718 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-04 02:31:46.433733 | orchestrator | Wednesday 04 February 2026 02:31:34 +0000 (0:00:00.608) 0:00:55.964 **** 2026-02-04 02:31:46.433749 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:46.433766 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:46.433781 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:46.433797 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:46.433815 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:46.433830 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:46.433847 | orchestrator | 2026-02-04 02:31:46.433864 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-04 02:31:46.433879 | orchestrator | Wednesday 04 February 2026 02:31:34 +0000 (0:00:00.838) 0:00:56.802 **** 2026-02-04 02:31:46.433896 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:46.433912 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:46.433929 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:46.433946 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:46.433962 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:46.433978 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:46.433993 | orchestrator | 2026-02-04 02:31:46.434009 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-04 02:31:46.434109 | orchestrator | Wednesday 04 February 2026 02:31:35 +0000 (0:00:00.668) 0:00:57.471 **** 2026-02-04 02:31:46.434126 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:46.434139 | orchestrator | skipping: [testbed-node-4] 2026-02-04 
02:31:46.434152 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:46.434166 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:31:46.434179 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:31:46.434193 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:31:46.434206 | orchestrator | 2026-02-04 02:31:46.434220 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-04 02:31:46.434234 | orchestrator | Wednesday 04 February 2026 02:31:36 +0000 (0:00:00.846) 0:00:58.317 **** 2026-02-04 02:31:46.434249 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:31:46.434263 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:46.434277 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:46.434290 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:31:46.434305 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:31:46.434318 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:31:46.434346 | orchestrator | 2026-02-04 02:31:46.434361 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-04 02:31:46.434374 | orchestrator | Wednesday 04 February 2026 02:31:37 +0000 (0:00:00.630) 0:00:58.947 **** 2026-02-04 02:31:46.434388 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:31:46.434402 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:31:46.434416 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:31:46.434430 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:31:46.434443 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:31:46.434477 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:31:46.434492 | orchestrator | 2026-02-04 02:31:46.434505 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-04 02:31:46.434519 | orchestrator | Wednesday 04 February 2026 02:31:38 +0000 (0:00:01.315) 0:01:00.263 **** 2026-02-04 02:31:46.434533 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:31:46.434546 | 
orchestrator | changed: [testbed-node-3] 2026-02-04 02:31:46.434559 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:31:46.434572 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:31:46.434586 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:31:46.434599 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:31:46.434612 | orchestrator | 2026-02-04 02:31:46.434626 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-04 02:31:46.434639 | orchestrator | Wednesday 04 February 2026 02:31:40 +0000 (0:00:01.699) 0:01:01.962 **** 2026-02-04 02:31:46.434653 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:31:46.434666 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:31:46.434679 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:31:46.434692 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:31:46.434705 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:31:46.434718 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:31:46.434731 | orchestrator | 2026-02-04 02:31:46.434744 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-04 02:31:46.434757 | orchestrator | Wednesday 04 February 2026 02:31:42 +0000 (0:00:02.370) 0:01:04.332 **** 2026-02-04 02:31:46.434772 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:31:46.434787 | orchestrator | 2026-02-04 02:31:46.434800 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-04 02:31:46.434814 | orchestrator | Wednesday 04 February 2026 02:31:43 +0000 (0:00:01.210) 0:01:05.543 **** 2026-02-04 02:31:46.434827 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:46.434841 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:46.434854 | orchestrator | 
skipping: [testbed-node-5] 2026-02-04 02:31:46.434867 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:46.434880 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:46.434894 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:46.434906 | orchestrator | 2026-02-04 02:31:46.434920 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-04 02:31:46.434933 | orchestrator | Wednesday 04 February 2026 02:31:44 +0000 (0:00:00.637) 0:01:06.181 **** 2026-02-04 02:31:46.434946 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:31:46.434957 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:31:46.434969 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:31:46.434981 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:31:46.434994 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:31:46.435006 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:31:46.435017 | orchestrator | 2026-02-04 02:31:46.435030 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-04 02:31:46.435043 | orchestrator | Wednesday 04 February 2026 02:31:45 +0000 (0:00:00.822) 0:01:07.003 **** 2026-02-04 02:31:46.435072 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-04 02:32:56.466082 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-04 02:32:56.466209 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-04 02:32:56.466220 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-04 02:32:56.466227 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-04 02:32:56.466235 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-04 02:32:56.466241 | orchestrator | ok: [testbed-node-3] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-04 02:32:56.466249 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-04 02:32:56.466257 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-04 02:32:56.466264 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-04 02:32:56.466271 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-04 02:32:56.466278 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-04 02:32:56.466286 | orchestrator | 2026-02-04 02:32:56.466293 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-04 02:32:56.466300 | orchestrator | Wednesday 04 February 2026 02:31:46 +0000 (0:00:01.280) 0:01:08.284 **** 2026-02-04 02:32:56.466307 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:32:56.466316 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:32:56.466322 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:32:56.466329 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:32:56.466335 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:32:56.466343 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:32:56.466350 | orchestrator | 2026-02-04 02:32:56.466357 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-04 02:32:56.466363 | orchestrator | Wednesday 04 February 2026 02:31:47 +0000 (0:00:01.126) 0:01:09.410 **** 2026-02-04 02:32:56.466370 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:32:56.466376 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:32:56.466383 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:32:56.466390 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:32:56.466397 | 
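The "Remove ceph udev rules" task deletes the two rule files shown in the loop output, and "Ensure tmpfiles.d is present" makes /run/ceph survive reboots via systemd-tmpfiles. A sketch under those assumptions — the tmpfiles.d path and line content below are assumed, not taken from the role:

```yaml
- name: Remove ceph udev rules
  ansible.builtin.file:
    path: "{{ item }}"
    state: absent
  loop:
    - /usr/lib/udev/rules.d/95-ceph-osd.rules
    - /usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules

# Assumed file name and entry: recreate /run/ceph at boot.
- name: Ensure tmpfiles.d is present
  ansible.builtin.lineinfile:
    path: /etc/tmpfiles.d/ceph-common.conf
    line: "d /run/ceph 0770 root root -"
    create: true
```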
orchestrator | skipping: [testbed-node-1] 2026-02-04 02:32:56.466403 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:32:56.466410 | orchestrator | 2026-02-04 02:32:56.466417 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-04 02:32:56.466424 | orchestrator | Wednesday 04 February 2026 02:31:48 +0000 (0:00:00.684) 0:01:10.094 **** 2026-02-04 02:32:56.466431 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:32:56.466438 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:32:56.466444 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:32:56.466451 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:32:56.466458 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:32:56.466487 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:32:56.466494 | orchestrator | 2026-02-04 02:32:56.466500 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-04 02:32:56.466508 | orchestrator | Wednesday 04 February 2026 02:31:49 +0000 (0:00:00.818) 0:01:10.913 **** 2026-02-04 02:32:56.466515 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:32:56.466522 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:32:56.466529 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:32:56.466537 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:32:56.466543 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:32:56.466550 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:32:56.466557 | orchestrator | 2026-02-04 02:32:56.466564 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-04 02:32:56.466572 | orchestrator | Wednesday 04 February 2026 02:31:49 +0000 (0:00:00.585) 0:01:11.499 **** 2026-02-04 02:32:56.466588 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:32:56.466597 | orchestrator | 2026-02-04 02:32:56.466605 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-04 02:32:56.466614 | orchestrator | Wednesday 04 February 2026 02:31:50 +0000 (0:00:01.225) 0:01:12.724 **** 2026-02-04 02:32:56.466621 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:32:56.466629 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:32:56.466637 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:32:56.466644 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:32:56.466651 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:32:56.466660 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:32:56.466667 | orchestrator | 2026-02-04 02:32:56.466675 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-04 02:32:56.466683 | orchestrator | Wednesday 04 February 2026 02:32:46 +0000 (0:00:55.792) 0:02:08.517 **** 2026-02-04 02:32:56.466691 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-04 02:32:56.466698 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-04 02:32:56.466706 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-04 02:32:56.466714 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:32:56.466722 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-04 02:32:56.466730 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-04 02:32:56.466737 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-04 02:32:56.466745 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:32:56.466753 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-04 
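The image pull above is the long pole of this role (0:00:55.792 for six nodes). A retried pull task is the usual pattern; a sketch, where `container_binary` and the `ceph_docker_*` variables are the role's conventional names and are assumptions here:

```yaml
# Hedged sketch: pull the Ceph image with retries to ride out
# transient registry failures.
- name: Pulling Ceph container image
  ansible.builtin.command: >-
    {{ container_binary }} pull
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
  register: image_pull
  until: image_pull.rc == 0
  retries: 3
  delay: 10
  changed_when: false
```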
02:32:56.466779 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-04 02:32:56.466796 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-04 02:32:56.466803 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:32:56.466810 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-04 02:32:56.466818 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-04 02:32:56.466825 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-04 02:32:56.466832 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:32:56.466841 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-04 02:32:56.466848 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-04 02:32:56.466856 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-04 02:32:56.466864 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:32:56.466871 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-04 02:32:56.466880 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-04 02:32:56.466886 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-04 02:32:56.466891 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:32:56.466897 | orchestrator | 2026-02-04 02:32:56.466902 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-04 02:32:56.466908 | orchestrator | Wednesday 04 February 2026 02:32:47 +0000 (0:00:00.711) 0:02:09.229 **** 2026-02-04 02:32:56.466913 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:32:56.466918 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:32:56.466923 | 
orchestrator | skipping: [testbed-node-5] 2026-02-04 02:32:56.466928 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:32:56.466932 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:32:56.466944 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:32:56.466948 | orchestrator | 2026-02-04 02:32:56.466953 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-04 02:32:56.466957 | orchestrator | Wednesday 04 February 2026 02:32:48 +0000 (0:00:00.828) 0:02:10.057 **** 2026-02-04 02:32:56.466962 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:32:56.466966 | orchestrator | 2026-02-04 02:32:56.466971 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-04 02:32:56.466976 | orchestrator | Wednesday 04 February 2026 02:32:48 +0000 (0:00:00.153) 0:02:10.211 **** 2026-02-04 02:32:56.466980 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:32:56.466985 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:32:56.466989 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:32:56.466994 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:32:56.466998 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:32:56.467003 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:32:56.467008 | orchestrator | 2026-02-04 02:32:56.467012 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-04 02:32:56.467017 | orchestrator | Wednesday 04 February 2026 02:32:48 +0000 (0:00:00.620) 0:02:10.831 **** 2026-02-04 02:32:56.467021 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:32:56.467026 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:32:56.467030 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:32:56.467035 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:32:56.467039 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:32:56.467044 | 
orchestrator | skipping: [testbed-node-2] 2026-02-04 02:32:56.467048 | orchestrator | 2026-02-04 02:32:56.467053 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-04 02:32:56.467058 | orchestrator | Wednesday 04 February 2026 02:32:49 +0000 (0:00:00.842) 0:02:11.673 **** 2026-02-04 02:32:56.467062 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:32:56.467067 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:32:56.467071 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:32:56.467076 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:32:56.467080 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:32:56.467085 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:32:56.467090 | orchestrator | 2026-02-04 02:32:56.467094 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-04 02:32:56.467099 | orchestrator | Wednesday 04 February 2026 02:32:50 +0000 (0:00:00.623) 0:02:12.296 **** 2026-02-04 02:32:56.467103 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:32:56.467108 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:32:56.467112 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:32:56.467117 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:32:56.467122 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:32:56.467126 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:32:56.467131 | orchestrator | 2026-02-04 02:32:56.467135 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-04 02:32:56.467140 | orchestrator | Wednesday 04 February 2026 02:32:53 +0000 (0:00:03.559) 0:02:15.856 **** 2026-02-04 02:32:56.467145 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:32:56.467149 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:32:56.467154 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:32:56.467158 | orchestrator | ok: [testbed-node-0] 2026-02-04 
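"Get ceph version" runs `ceph --version` inside the pulled image, and the follow-up `set_fact` takes the third whitespace-separated token of output shaped like `ceph version 18.2.x (…) reef (stable)`. A sketch, with variable names assumed:

```yaml
- name: Get ceph version
  ansible.builtin.command: >-
    {{ container_binary }} run --rm --entrypoint /usr/bin/ceph
    {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
    --version
  register: ceph_version_out
  changed_when: false

# "ceph version 18.2.4 (...) reef (stable)" -> "18.2.4"
- name: Set_fact ceph_version
  ansible.builtin.set_fact:
    ceph_version: "{{ ceph_version_out.stdout.split(' ')[2] }}"
```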
02:32:56.467163 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:32:56.467167 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:32:56.467172 | orchestrator | 2026-02-04 02:32:56.467176 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-04 02:32:56.467181 | orchestrator | Wednesday 04 February 2026 02:32:54 +0000 (0:00:00.611) 0:02:16.467 **** 2026-02-04 02:32:56.467187 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:32:56.467193 | orchestrator | 2026-02-04 02:32:56.467197 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-04 02:32:56.467205 | orchestrator | Wednesday 04 February 2026 02:32:55 +0000 (0:00:01.256) 0:02:17.724 **** 2026-02-04 02:32:56.467210 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:32:56.467214 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:32:56.467223 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:33:10.847035 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:33:10.847144 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:33:10.847154 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:33:10.847161 | orchestrator | 2026-02-04 02:33:10.847168 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-04 02:33:10.847176 | orchestrator | Wednesday 04 February 2026 02:32:56 +0000 (0:00:00.848) 0:02:18.572 **** 2026-02-04 02:33:10.847182 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:33:10.847188 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:33:10.847206 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:33:10.847213 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:33:10.847219 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:33:10.847225 | 
orchestrator | skipping: [testbed-node-2] 2026-02-04 02:33:10.847231 | orchestrator | 2026-02-04 02:33:10.847237 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-04 02:33:10.847243 | orchestrator | Wednesday 04 February 2026 02:32:57 +0000 (0:00:00.633) 0:02:19.206 **** 2026-02-04 02:33:10.847257 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:33:10.847263 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:33:10.847269 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:33:10.847275 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:33:10.847281 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:33:10.847287 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:33:10.847293 | orchestrator | 2026-02-04 02:33:10.847301 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-04 02:33:10.847311 | orchestrator | Wednesday 04 February 2026 02:32:58 +0000 (0:00:00.857) 0:02:20.063 **** 2026-02-04 02:33:10.847326 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:33:10.847335 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:33:10.847345 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:33:10.847354 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:33:10.847364 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:33:10.847372 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:33:10.847381 | orchestrator | 2026-02-04 02:33:10.847391 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-04 02:33:10.847400 | orchestrator | Wednesday 04 February 2026 02:32:58 +0000 (0:00:00.659) 0:02:20.723 **** 2026-02-04 02:33:10.847410 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:33:10.847420 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:33:10.847435 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:33:10.847445 | 
orchestrator | skipping: [testbed-node-0] 2026-02-04 02:33:10.847455 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:33:10.847486 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:33:10.847494 | orchestrator | 2026-02-04 02:33:10.847504 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-04 02:33:10.847514 | orchestrator | Wednesday 04 February 2026 02:32:59 +0000 (0:00:00.915) 0:02:21.639 **** 2026-02-04 02:33:10.847523 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:33:10.847533 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:33:10.847543 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:33:10.847552 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:33:10.847562 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:33:10.847572 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:33:10.847583 | orchestrator | 2026-02-04 02:33:10.847593 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-04 02:33:10.847605 | orchestrator | Wednesday 04 February 2026 02:33:00 +0000 (0:00:00.629) 0:02:22.268 **** 2026-02-04 02:33:10.847639 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:33:10.847646 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:33:10.847654 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:33:10.847662 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:33:10.847672 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:33:10.847688 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:33:10.847699 | orchestrator | 2026-02-04 02:33:10.847709 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-04 02:33:10.847718 | orchestrator | Wednesday 04 February 2026 02:33:01 +0000 (0:00:00.864) 0:02:23.133 **** 2026-02-04 02:33:10.847727 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:33:10.847735 | 
orchestrator | skipping: [testbed-node-4] 2026-02-04 02:33:10.847743 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:33:10.847752 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:33:10.847761 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:33:10.847770 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:33:10.847780 | orchestrator | 2026-02-04 02:33:10.847790 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-04 02:33:10.847801 | orchestrator | Wednesday 04 February 2026 02:33:01 +0000 (0:00:00.624) 0:02:23.757 **** 2026-02-04 02:33:10.847812 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:33:10.847823 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:33:10.847834 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:33:10.847844 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:33:10.847855 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:33:10.847865 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:33:10.847875 | orchestrator | 2026-02-04 02:33:10.847886 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-04 02:33:10.847896 | orchestrator | Wednesday 04 February 2026 02:33:03 +0000 (0:00:01.291) 0:02:25.048 **** 2026-02-04 02:33:10.847908 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:33:10.847919 | orchestrator | 2026-02-04 02:33:10.847930 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-04 02:33:10.847940 | orchestrator | Wednesday 04 February 2026 02:33:04 +0000 (0:00:01.278) 0:02:26.326 **** 2026-02-04 02:33:10.847950 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-04 02:33:10.847959 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-04 
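The release.yml ladder above is one guarded `set_fact` per release name (jewel through reef); only the branch whose major version matches fires, which is why every branch skips except reef (Ceph 18). A sketch of the matching branch — the exact `when` expression in the role is an assumption:

```yaml
# One rung of the release ladder; only this one matched above.
- name: Set_fact ceph_release reef
  ansible.builtin.set_fact:
    ceph_release: reef
  when: ceph_version.split('.')[0] is version('18', '==')
```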
02:33:10.847965 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-04 02:33:10.847970 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-04 02:33:10.847976 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-04 02:33:10.847983 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-04 02:33:10.848015 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-04 02:33:10.848037 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-04 02:33:10.848046 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-04 02:33:10.848055 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-04 02:33:10.848065 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-04 02:33:10.848073 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-04 02:33:10.848082 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-04 02:33:10.848091 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-04 02:33:10.848099 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-04 02:33:10.848107 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-04 02:33:10.848117 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-04 02:33:10.848125 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-04 02:33:10.848134 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-04 02:33:10.848179 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-04 02:33:10.848187 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-04 02:33:10.848193 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-04 02:33:10.848198 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 
2026-02-04 02:33:10.848205 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-04 02:33:10.848210 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-04 02:33:10.848216 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-04 02:33:10.848222 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-04 02:33:10.848228 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-04 02:33:10.848233 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-04 02:33:10.848239 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-04 02:33:10.848245 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-04 02:33:10.848251 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-04 02:33:10.848257 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-04 02:33:10.848262 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-04 02:33:10.848268 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-04 02:33:10.848274 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-04 02:33:10.848280 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-04 02:33:10.848286 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-04 02:33:10.848291 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-04 02:33:10.848299 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-04 02:33:10.848309 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-04 02:33:10.848324 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-04 02:33:10.848334 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-04 
02:33:10.848343 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-04 02:33:10.848353 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-04 02:33:10.848362 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-04 02:33:10.848371 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-04 02:33:10.848380 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-04 02:33:10.848390 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-04 02:33:10.848399 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-04 02:33:10.848409 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-04 02:33:10.848417 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-04 02:33:10.848422 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-04 02:33:10.848428 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-04 02:33:10.848434 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-04 02:33:10.848440 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-04 02:33:10.848445 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-04 02:33:10.848451 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-04 02:33:10.848457 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-04 02:33:10.848528 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-04 02:33:10.848534 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-04 02:33:10.848547 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/bootstrap-mds) 2026-02-04 02:33:10.848553 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-04 02:33:10.848563 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-04 02:33:10.848577 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-04 02:33:10.848588 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-04 02:33:10.848607 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-04 02:33:24.255064 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-04 02:33:24.255180 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-04 02:33:24.255197 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-04 02:33:24.255210 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-04 02:33:24.255221 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-04 02:33:24.255233 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-04 02:33:24.255244 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-04 02:33:24.255255 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-04 02:33:24.255267 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-04 02:33:24.255278 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-04 02:33:24.255290 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-04 02:33:24.255302 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-04 02:33:24.255313 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 
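The directory loop above fans out the standard Ceph tree on every node before any daemon container starts. The path list below is taken directly from the loop items in the log; the ownership and mode are assumptions (167 is the conventional ceph UID inside the container image):

```yaml
- name: Create ceph initial directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    owner: "{{ ceph_uid | default('167') }}"  # assumed default
    group: "{{ ceph_uid | default('167') }}"
    mode: "0755"
  loop:
    - /etc/ceph
    - /var/lib/ceph/
    - /var/lib/ceph/mon
    - /var/lib/ceph/osd
    - /var/lib/ceph/mds
    - /var/lib/ceph/tmp
    - /var/lib/ceph/crash
    - /var/lib/ceph/radosgw
    - /var/lib/ceph/bootstrap-rgw
    - /var/lib/ceph/bootstrap-mgr
    - /var/lib/ceph/bootstrap-mds
    - /var/lib/ceph/bootstrap-osd
    - /var/lib/ceph/bootstrap-rbd
    - /var/lib/ceph/bootstrap-rbd-mirror
    - /var/run/ceph
    - /var/log/ceph
```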
2026-02-04 02:33:24.255324 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-04 02:33:24.255337 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-02-04 02:33:24.255349 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-04 02:33:24.255361 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-02-04 02:33:24.255373 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-02-04 02:33:24.255384 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-04 02:33:24.255395 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-04 02:33:24.255406 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-02-04 02:33:24.255418 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-02-04 02:33:24.255429 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-02-04 02:33:24.255440 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-02-04 02:33:24.255451 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-02-04 02:33:24.255494 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-02-04 02:33:24.255506 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-02-04 02:33:24.255517 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-02-04 02:33:24.255529 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-02-04 02:33:24.255540 | orchestrator |
2026-02-04 02:33:24.255552 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-04 02:33:24.255563 | orchestrator | Wednesday 04 February 2026 02:33:10 +0000 (0:00:06.336) 0:02:32.663 ****
2026-02-04 02:33:24.255575 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:24.255588 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:24.255601 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:24.255615 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 02:33:24.255651 | orchestrator |
2026-02-04 02:33:24.255664 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-04 02:33:24.255678 | orchestrator | Wednesday 04 February 2026 02:33:11 +0000 (0:00:01.033) 0:02:33.696 ****
2026-02-04 02:33:24.255690 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-04 02:33:24.255704 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-04 02:33:24.255718 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-04 02:33:24.255731 | orchestrator |
2026-02-04 02:33:24.255744 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-04 02:33:24.255757 | orchestrator | Wednesday 04 February 2026 02:33:12 +0000 (0:00:00.686) 0:02:34.382 ****
2026-02-04 02:33:24.255769 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-04 02:33:24.255782 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-04 02:33:24.255796 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-04 02:33:24.255808 | orchestrator |
2026-02-04 02:33:24.255820 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-04 02:33:24.255834 | orchestrator | Wednesday 04 February 2026 02:33:13 +0000 (0:00:01.190) 0:02:35.573 ****
2026-02-04 02:33:24.255847 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:33:24.255860 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:33:24.255872 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:33:24.255884 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:24.255897 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:24.255910 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:24.255923 | orchestrator |
2026-02-04 02:33:24.255936 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-04 02:33:24.255975 | orchestrator | Wednesday 04 February 2026 02:33:14 +0000 (0:00:00.850) 0:02:36.423 ****
2026-02-04 02:33:24.255988 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:33:24.255999 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:33:24.256010 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:33:24.256020 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:24.256032 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:24.256042 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:24.256053 | orchestrator |
2026-02-04 02:33:24.256065 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-04 02:33:24.256076 | orchestrator | Wednesday 04 February 2026 02:33:15 +0000 (0:00:00.633) 0:02:37.057 ****
2026-02-04 02:33:24.256087 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:24.256098 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:24.256109 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:24.256120 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:24.256131 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:24.256142 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:24.256153 | orchestrator |
2026-02-04 02:33:24.256164 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-04 02:33:24.256175 | orchestrator | Wednesday 04 February 2026 02:33:16 +0000 (0:00:00.854) 0:02:37.911 ****
2026-02-04 02:33:24.256186 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:24.256197 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:24.256208 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:24.256218 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:24.256229 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:24.256240 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:24.256260 | orchestrator |
2026-02-04 02:33:24.256271 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-04 02:33:24.256282 | orchestrator | Wednesday 04 February 2026 02:33:16 +0000 (0:00:00.612) 0:02:38.524 ****
2026-02-04 02:33:24.256293 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:24.256304 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:24.256315 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:24.256326 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:24.256336 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:24.256347 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:24.256358 | orchestrator |
2026-02-04 02:33:24.256369 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-04 02:33:24.256380 | orchestrator | Wednesday 04 February 2026 02:33:17 +0000 (0:00:00.870) 0:02:39.394 ****
2026-02-04 02:33:24.256391 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:24.256402 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:24.256413 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:24.256424 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:24.256435 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:24.256445 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:24.256456 | orchestrator |
2026-02-04 02:33:24.256486 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-04 02:33:24.256498 | orchestrator | Wednesday 04 February 2026 02:33:18 +0000 (0:00:00.656) 0:02:40.051 ****
2026-02-04 02:33:24.256509 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:24.256520 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:24.256531 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:24.256541 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:24.256552 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:24.256563 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:24.256574 | orchestrator |
2026-02-04 02:33:24.256585 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-04 02:33:24.256596 | orchestrator | Wednesday 04 February 2026 02:33:19 +0000 (0:00:00.877) 0:02:40.929 ****
2026-02-04 02:33:24.256607 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:24.256618 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:24.256629 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:24.256640 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:24.256651 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:24.256662 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:24.256673 | orchestrator |
2026-02-04 02:33:24.256684 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-04 02:33:24.256695 | orchestrator | Wednesday 04 February 2026 02:33:19 +0000 (0:00:00.591) 0:02:41.521 ****
2026-02-04 02:33:24.256706 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:24.256717 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:24.256728 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:24.256739 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:33:24.256750 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:33:24.256761 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:33:24.256772 | orchestrator |
2026-02-04 02:33:24.256783 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-04 02:33:24.256797 | orchestrator | Wednesday 04 February 2026 02:33:22 +0000 (0:00:02.672) 0:02:44.193 ****
2026-02-04 02:33:24.256815 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:33:24.256834 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:33:24.256850 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:33:24.256868 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:24.256887 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:24.256905 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:24.256924 | orchestrator |
2026-02-04 02:33:24.256942 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-04 02:33:24.256972 | orchestrator | Wednesday 04 February 2026 02:33:22 +0000 (0:00:00.621) 0:02:44.815 ****
2026-02-04 02:33:24.256984 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:33:24.256995 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:33:24.257006 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:33:24.257017 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:24.257028 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:24.257038 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:24.257049 | orchestrator |
2026-02-04 02:33:24.257060 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-04 02:33:24.257071 | orchestrator | Wednesday 04 February 2026 02:33:23 +0000 (0:00:00.885) 0:02:45.701 ****
2026-02-04 02:33:24.257082 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:24.257093 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:24.257118 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:38.633627 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:38.633723 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:38.633734 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:38.633743 | orchestrator |
2026-02-04 02:33:38.633753 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-04 02:33:38.633762 | orchestrator | Wednesday 04 February 2026 02:33:24 +0000 (0:00:00.646) 0:02:46.347 ****
2026-02-04 02:33:38.633771 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-04 02:33:38.633781 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-04 02:33:38.633788 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-04 02:33:38.633796 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:38.633805 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:38.633818 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:38.633830 | orchestrator |
2026-02-04 02:33:38.633849 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-04 02:33:38.633863 | orchestrator | Wednesday 04 February 2026 02:33:25 +0000 (0:00:00.928) 0:02:47.276 ****
2026-02-04 02:33:38.633877 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-04 02:33:38.633892 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-04 02:33:38.633906 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-04 02:33:38.633918 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-04 02:33:38.633931 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:38.633944 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-04 02:33:38.633982 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-04 02:33:38.633996 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:38.634004 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:38.634011 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:38.634067 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:38.634075 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:38.634083 | orchestrator |
2026-02-04 02:33:38.634090 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-04 02:33:38.634097 | orchestrator | Wednesday 04 February 2026 02:33:26 +0000 (0:00:00.662) 0:02:47.938 ****
2026-02-04 02:33:38.634105 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:38.634112 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:38.634119 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:38.634126 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:38.634134 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:38.634141 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:38.634148 | orchestrator |
2026-02-04 02:33:38.634155 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-04 02:33:38.634162 | orchestrator | Wednesday 04 February 2026 02:33:26 +0000 (0:00:00.877) 0:02:48.816 ****
2026-02-04 02:33:38.634170 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:38.634178 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:38.634187 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:38.634195 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:38.634203 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:38.634212 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:38.634220 | orchestrator |
2026-02-04 02:33:38.634229 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-04 02:33:38.634238 | orchestrator | Wednesday 04 February 2026 02:33:27 +0000 (0:00:00.636) 0:02:49.452 ****
2026-02-04 02:33:38.634274 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:38.634284 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:38.634292 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:38.634301 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:38.634310 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:38.634318 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:38.634326 | orchestrator |
2026-02-04 02:33:38.634335 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-04 02:33:38.634343 | orchestrator | Wednesday 04 February 2026 02:33:28 +0000 (0:00:00.941) 0:02:50.394 ****
2026-02-04 02:33:38.634352 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:38.634360 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:38.634369 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:38.634377 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:38.634386 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:38.634394 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:38.634403 | orchestrator |
2026-02-04 02:33:38.634411 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-04 02:33:38.634420 | orchestrator | Wednesday 04 February 2026 02:33:29 +0000 (0:00:00.816) 0:02:51.210 ****
2026-02-04 02:33:38.634428 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:38.634437 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:38.634445 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:38.634453 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:38.634512 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:38.634522 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:38.634539 | orchestrator |
2026-02-04 02:33:38.634548 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-04 02:33:38.634555 | orchestrator | Wednesday 04 February 2026 02:33:30 +0000 (0:00:00.735) 0:02:51.946 ****
2026-02-04 02:33:38.634562 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:33:38.634571 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:33:38.634578 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:33:38.634585 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:38.634592 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:38.634599 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:38.634606 | orchestrator |
2026-02-04 02:33:38.634614 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-04 02:33:38.634621 | orchestrator | Wednesday 04 February 2026 02:33:31 +0000 (0:00:00.923) 0:02:52.869 ****
2026-02-04 02:33:38.634628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 02:33:38.634636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 02:33:38.634643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 02:33:38.634651 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:38.634658 | orchestrator |
2026-02-04 02:33:38.634665 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-04 02:33:38.634672 | orchestrator | Wednesday 04 February 2026 02:33:31 +0000 (0:00:00.457) 0:02:53.326 ****
2026-02-04 02:33:38.634680 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 02:33:38.634687 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 02:33:38.634694 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 02:33:38.634701 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:38.634708 | orchestrator |
2026-02-04 02:33:38.634716 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-04 02:33:38.634723 | orchestrator | Wednesday 04 February 2026 02:33:31 +0000 (0:00:00.442) 0:02:53.769 ****
2026-02-04 02:33:38.634730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 02:33:38.634737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 02:33:38.634745 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 02:33:38.634752 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:38.634759 | orchestrator |
2026-02-04 02:33:38.634766 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-04 02:33:38.634773 | orchestrator | Wednesday 04 February 2026 02:33:32 +0000 (0:00:00.436) 0:02:54.205 ****
2026-02-04 02:33:38.634780 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:33:38.634788 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:33:38.634795 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:33:38.634802 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:38.634809 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:38.634816 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:38.634823 | orchestrator |
2026-02-04 02:33:38.634830 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-04 02:33:38.634838 | orchestrator | Wednesday 04 February 2026 02:33:32 +0000 (0:00:00.640) 0:02:54.846 ****
2026-02-04 02:33:38.634845 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-04 02:33:38.634852 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-04 02:33:38.634859 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-04 02:33:38.634866 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-04 02:33:38.634874 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:38.634881 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-04 02:33:38.634888 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:38.634895 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-04 02:33:38.634902 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:38.634910 | orchestrator |
2026-02-04 02:33:38.634917 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-04 02:33:38.634929 | orchestrator | Wednesday 04 February 2026 02:33:34 +0000 (0:00:01.851) 0:02:56.697 ****
2026-02-04 02:33:38.634937 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:33:38.634944 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:33:38.634951 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:33:38.634958 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:33:38.634965 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:33:38.634972 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:33:38.634979 | orchestrator |
2026-02-04 02:33:38.634986 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-04 02:33:38.634994 | orchestrator | Wednesday 04 February 2026 02:33:37 +0000 (0:00:02.795) 0:02:59.493 ****
2026-02-04 02:33:38.635001 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:33:38.635026 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:33:55.317936 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:33:55.318077 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:33:55.318089 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:33:55.318098 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:33:55.318106 | orchestrator |
2026-02-04 02:33:55.318114 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-04 02:33:55.318123 | orchestrator | Wednesday 04 February 2026 02:33:38 +0000 (0:00:00.994) 0:03:00.487 ****
2026-02-04 02:33:55.318131 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.318139 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:55.318146 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:55.318154 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:33:55.318162 | orchestrator |
2026-02-04 02:33:55.318170 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-04 02:33:55.318177 | orchestrator | Wednesday 04 February 2026 02:33:39 +0000 (0:00:01.111) 0:03:01.599 ****
2026-02-04 02:33:55.318185 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:33:55.318193 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:33:55.318200 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:33:55.318208 | orchestrator |
2026-02-04 02:33:55.318215 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-04 02:33:55.318223 | orchestrator | Wednesday 04 February 2026 02:33:40 +0000 (0:00:00.339) 0:03:01.938 ****
2026-02-04 02:33:55.318230 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:33:55.318237 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:33:55.318245 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:33:55.318252 | orchestrator |
2026-02-04 02:33:55.318259 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-04 02:33:55.318267 | orchestrator | Wednesday 04 February 2026 02:33:41 +0000 (0:00:01.480) 0:03:03.419 ****
2026-02-04 02:33:55.318274 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 02:33:55.318283 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 02:33:55.318290 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 02:33:55.318297 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:55.318305 | orchestrator |
2026-02-04 02:33:55.318312 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-04 02:33:55.318320 | orchestrator | Wednesday 04 February 2026 02:33:42 +0000 (0:00:00.655) 0:03:04.074 ****
2026-02-04 02:33:55.318327 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:33:55.318335 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:33:55.318343 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:33:55.318350 | orchestrator |
2026-02-04 02:33:55.318358 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-04 02:33:55.318365 | orchestrator | Wednesday 04 February 2026 02:33:42 +0000 (0:00:00.331) 0:03:04.406 ****
2026-02-04 02:33:55.318372 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:55.318380 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:55.318387 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:55.318414 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 02:33:55.318422 | orchestrator |
2026-02-04 02:33:55.318429 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-04 02:33:55.318436 | orchestrator | Wednesday 04 February 2026 02:33:43 +0000 (0:00:01.052) 0:03:05.458 ****
2026-02-04 02:33:55.318444 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 02:33:55.318451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 02:33:55.318458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 02:33:55.318503 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.318513 | orchestrator |
2026-02-04 02:33:55.318522 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-04 02:33:55.318530 | orchestrator | Wednesday 04 February 2026 02:33:44 +0000 (0:00:00.441) 0:03:05.900 ****
2026-02-04 02:33:55.318539 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.318547 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:55.318555 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:55.318564 | orchestrator |
2026-02-04 02:33:55.318572 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-04 02:33:55.318581 | orchestrator | Wednesday 04 February 2026 02:33:44 +0000 (0:00:00.357) 0:03:06.257 ****
2026-02-04 02:33:55.318589 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.318598 | orchestrator |
2026-02-04 02:33:55.318607 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-04 02:33:55.318615 | orchestrator | Wednesday 04 February 2026 02:33:44 +0000 (0:00:00.283) 0:03:06.541 ****
2026-02-04 02:33:55.318624 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.318632 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:55.318641 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:55.318650 | orchestrator |
2026-02-04 02:33:55.318658 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-04 02:33:55.318666 | orchestrator | Wednesday 04 February 2026 02:33:45 +0000 (0:00:00.336) 0:03:06.877 ****
2026-02-04 02:33:55.318675 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.318683 | orchestrator |
2026-02-04 02:33:55.318691 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-04 02:33:55.318700 | orchestrator | Wednesday 04 February 2026 02:33:45 +0000 (0:00:00.732) 0:03:07.610 ****
2026-02-04 02:33:55.318708 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.318716 | orchestrator |
2026-02-04 02:33:55.318725 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-04 02:33:55.318734 | orchestrator | Wednesday 04 February 2026 02:33:45 +0000 (0:00:00.243) 0:03:07.853 ****
2026-02-04 02:33:55.318742 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.318751 | orchestrator |
2026-02-04 02:33:55.318759 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-04 02:33:55.318767 | orchestrator | Wednesday 04 February 2026 02:33:46 +0000 (0:00:00.157) 0:03:08.010 ****
2026-02-04 02:33:55.318788 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.318797 | orchestrator |
2026-02-04 02:33:55.318821 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-04 02:33:55.318830 | orchestrator | Wednesday 04 February 2026 02:33:46 +0000 (0:00:00.243) 0:03:08.254 ****
2026-02-04 02:33:55.318839 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.318848 | orchestrator |
2026-02-04 02:33:55.318856 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-04 02:33:55.318864 | orchestrator | Wednesday 04 February 2026 02:33:46 +0000 (0:00:00.241) 0:03:08.495 ****
2026-02-04 02:33:55.318871 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 02:33:55.318879 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 02:33:55.318886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 02:33:55.318899 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.318907 | orchestrator |
2026-02-04 02:33:55.318914 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-04 02:33:55.318921 | orchestrator | Wednesday 04 February 2026 02:33:47 +0000 (0:00:00.415) 0:03:08.910 ****
2026-02-04 02:33:55.318929 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.318936 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:33:55.318943 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:33:55.318950 | orchestrator |
2026-02-04 02:33:55.318958 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-04 02:33:55.318965 | orchestrator | Wednesday 04 February 2026 02:33:47 +0000 (0:00:00.310) 0:03:09.220 ****
2026-02-04 02:33:55.318972 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.318979 | orchestrator |
2026-02-04 02:33:55.318986 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-04 02:33:55.318994 | orchestrator | Wednesday 04 February 2026 02:33:47 +0000 (0:00:00.233) 0:03:09.454 ****
2026-02-04 02:33:55.319001 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.319008 | orchestrator |
2026-02-04 02:33:55.319016 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-04 02:33:55.319023 | orchestrator | Wednesday 04 February 2026 02:33:47 +0000 (0:00:00.242) 0:03:09.697 ****
2026-02-04 02:33:55.319030 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:55.319037 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:55.319045 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:55.319052 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 02:33:55.319059 | orchestrator |
2026-02-04 02:33:55.319067 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-04 02:33:55.319074 | orchestrator | Wednesday 04 February 2026 02:33:48 +0000 (0:00:01.129) 0:03:10.826 ****
2026-02-04 02:33:55.319081 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:33:55.319089 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:33:55.319096 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:33:55.319103 | orchestrator |
2026-02-04 02:33:55.319111 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-04 02:33:55.319118 | orchestrator | Wednesday 04 February 2026 02:33:49 +0000 (0:00:00.326) 0:03:11.152 ****
2026-02-04 02:33:55.319125 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:33:55.319132 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:33:55.319140 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:33:55.319147 | orchestrator |
2026-02-04 02:33:55.319154 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-04 02:33:55.319162 | orchestrator | Wednesday 04 February 2026 02:33:50 +0000 (0:00:01.471) 0:03:12.624 ****
2026-02-04 02:33:55.319169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 02:33:55.319176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 02:33:55.319183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 02:33:55.319191 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:33:55.319198 | orchestrator |
2026-02-04 02:33:55.319206 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-04 02:33:55.319213 | orchestrator | Wednesday 04 February 2026 02:33:51 +0000 (0:00:00.654) 0:03:13.278 ****
2026-02-04 02:33:55.319220 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:33:55.319228 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:33:55.319235 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:33:55.319242 | orchestrator |
2026-02-04 02:33:55.319250 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-04 02:33:55.319257 | orchestrator | Wednesday 04 February 2026 02:33:51 +0000 (0:00:00.361) 0:03:13.639 ****
2026-02-04 02:33:55.319264 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:33:55.319271 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:33:55.319279 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:33:55.319290 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 02:33:55.319298 | orchestrator |
2026-02-04 02:33:55.319305 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-04 02:33:55.319313 | orchestrator | Wednesday 04 February 2026 02:33:52 +0000 (0:00:01.093) 0:03:14.733 ****
2026-02-04 02:33:55.319320 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:33:55.319327 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:33:55.319335 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:33:55.319342 | orchestrator |
2026-02-04 02:33:55.319349 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-04 02:33:55.319357 | orchestrator | Wednesday 04 February 2026 02:33:53 +0000 (0:00:00.372) 0:03:15.105 ****
2026-02-04 02:33:55.319364 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:33:55.319371 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:33:55.319378 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:33:55.319385 | orchestrator |
2026-02-04 02:33:55.319393 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-04 02:33:55.319400 | orchestrator | Wednesday 04 February 2026 02:33:54 +0000 (0:00:01.184) 0:03:16.290 ****
2026-02-04 02:33:55.319408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 02:33:55.319415 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 02:33:55.319432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 02:34:11.834370 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:34:11.834533 | orchestrator |
2026-02-04 02:34:11.834553 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-04 02:34:11.834566 | orchestrator | Wednesday 04 February 2026 02:33:55 +0000 (0:00:00.877) 0:03:17.167 ****
2026-02-04 02:34:11.834578 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:34:11.834591 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:34:11.834602 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:34:11.834614 | orchestrator |
2026-02-04 02:34:11.834625 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-04 02:34:11.834636 | orchestrator | Wednesday 04 February 2026 02:33:55 +0000 (0:00:00.558) 0:03:17.726 ****
2026-02-04 02:34:11.834647 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:34:11.834659 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:34:11.834670 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:34:11.834681 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:34:11.834692 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:34:11.834703 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:34:11.834714 | orchestrator |
2026-02-04 02:34:11.834725 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-04 02:34:11.834736 | orchestrator | Wednesday 04 February 2026 02:33:56 +0000 (0:00:00.641) 0:03:18.368 ****
2026-02-04 02:34:11.834747 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:34:11.834758 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:34:11.834769 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:34:11.834780 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:34:11.834792 | orchestrator |
2026-02-04 02:34:11.834803 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-04 02:34:11.834814 | orchestrator | Wednesday 04 February 2026 02:33:57 +0000 (0:00:01.101) 0:03:19.470 ****
2026-02-04 02:34:11.834825 | orchestrator |
ok: [testbed-node-0] 2026-02-04 02:34:11.834836 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:34:11.834847 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:34:11.834858 | orchestrator | 2026-02-04 02:34:11.834870 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-04 02:34:11.834881 | orchestrator | Wednesday 04 February 2026 02:33:57 +0000 (0:00:00.339) 0:03:19.809 **** 2026-02-04 02:34:11.834892 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:34:11.834928 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:34:11.834940 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:34:11.834951 | orchestrator | 2026-02-04 02:34:11.834962 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-04 02:34:11.834973 | orchestrator | Wednesday 04 February 2026 02:33:59 +0000 (0:00:01.166) 0:03:20.976 **** 2026-02-04 02:34:11.834985 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 02:34:11.834996 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 02:34:11.835008 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 02:34:11.835019 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:34:11.835030 | orchestrator | 2026-02-04 02:34:11.835041 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-04 02:34:11.835052 | orchestrator | Wednesday 04 February 2026 02:34:00 +0000 (0:00:01.124) 0:03:22.100 **** 2026-02-04 02:34:11.835063 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:34:11.835074 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:34:11.835085 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:34:11.835096 | orchestrator | 2026-02-04 02:34:11.835108 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-02-04 02:34:11.835119 | orchestrator | 2026-02-04 
02:34:11.835130 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-04 02:34:11.835141 | orchestrator | Wednesday 04 February 2026 02:34:00 +0000 (0:00:00.623) 0:03:22.724 **** 2026-02-04 02:34:11.835152 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:34:11.835165 | orchestrator | 2026-02-04 02:34:11.835176 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-04 02:34:11.835187 | orchestrator | Wednesday 04 February 2026 02:34:01 +0000 (0:00:00.794) 0:03:23.518 **** 2026-02-04 02:34:11.835198 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:34:11.835209 | orchestrator | 2026-02-04 02:34:11.835221 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-04 02:34:11.835232 | orchestrator | Wednesday 04 February 2026 02:34:02 +0000 (0:00:00.598) 0:03:24.117 **** 2026-02-04 02:34:11.835243 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:34:11.835254 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:34:11.835265 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:34:11.835276 | orchestrator | 2026-02-04 02:34:11.835288 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-04 02:34:11.835299 | orchestrator | Wednesday 04 February 2026 02:34:02 +0000 (0:00:00.736) 0:03:24.853 **** 2026-02-04 02:34:11.835310 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:34:11.835321 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:34:11.835331 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:34:11.835342 | orchestrator | 2026-02-04 02:34:11.835353 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-02-04 02:34:11.835364 | orchestrator | Wednesday 04 February 2026 02:34:03 +0000 (0:00:00.575) 0:03:25.429 **** 2026-02-04 02:34:11.835376 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:34:11.835387 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:34:11.835398 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:34:11.835408 | orchestrator | 2026-02-04 02:34:11.835419 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-04 02:34:11.835431 | orchestrator | Wednesday 04 February 2026 02:34:03 +0000 (0:00:00.321) 0:03:25.750 **** 2026-02-04 02:34:11.835442 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:34:11.835453 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:34:11.835500 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:34:11.835513 | orchestrator | 2026-02-04 02:34:11.835541 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-04 02:34:11.835553 | orchestrator | Wednesday 04 February 2026 02:34:04 +0000 (0:00:00.305) 0:03:26.056 **** 2026-02-04 02:34:11.835573 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:34:11.835584 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:34:11.835595 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:34:11.835606 | orchestrator | 2026-02-04 02:34:11.835617 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-04 02:34:11.835628 | orchestrator | Wednesday 04 February 2026 02:34:04 +0000 (0:00:00.757) 0:03:26.814 **** 2026-02-04 02:34:11.835639 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:34:11.835650 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:34:11.835661 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:34:11.835673 | orchestrator | 2026-02-04 02:34:11.835683 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-04 
02:34:11.835694 | orchestrator | Wednesday 04 February 2026 02:34:05 +0000 (0:00:00.587) 0:03:27.402 **** 2026-02-04 02:34:11.835706 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:34:11.835716 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:34:11.835727 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:34:11.835738 | orchestrator | 2026-02-04 02:34:11.835749 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-04 02:34:11.835760 | orchestrator | Wednesday 04 February 2026 02:34:05 +0000 (0:00:00.347) 0:03:27.749 **** 2026-02-04 02:34:11.835771 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:34:11.835782 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:34:11.835793 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:34:11.835804 | orchestrator | 2026-02-04 02:34:11.835815 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-04 02:34:11.835826 | orchestrator | Wednesday 04 February 2026 02:34:06 +0000 (0:00:00.706) 0:03:28.456 **** 2026-02-04 02:34:11.835837 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:34:11.835848 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:34:11.835858 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:34:11.835869 | orchestrator | 2026-02-04 02:34:11.835881 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-04 02:34:11.835892 | orchestrator | Wednesday 04 February 2026 02:34:07 +0000 (0:00:00.730) 0:03:29.187 **** 2026-02-04 02:34:11.835903 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:34:11.835914 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:34:11.835925 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:34:11.835936 | orchestrator | 2026-02-04 02:34:11.835947 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-04 02:34:11.835958 | orchestrator | 
Wednesday 04 February 2026 02:34:07 +0000 (0:00:00.573) 0:03:29.761 **** 2026-02-04 02:34:11.835969 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:34:11.835980 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:34:11.835991 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:34:11.836002 | orchestrator | 2026-02-04 02:34:11.836013 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-04 02:34:11.836024 | orchestrator | Wednesday 04 February 2026 02:34:08 +0000 (0:00:00.377) 0:03:30.139 **** 2026-02-04 02:34:11.836035 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:34:11.836046 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:34:11.836057 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:34:11.836068 | orchestrator | 2026-02-04 02:34:11.836079 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-04 02:34:11.836090 | orchestrator | Wednesday 04 February 2026 02:34:08 +0000 (0:00:00.351) 0:03:30.490 **** 2026-02-04 02:34:11.836101 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:34:11.836112 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:34:11.836124 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:34:11.836135 | orchestrator | 2026-02-04 02:34:11.836146 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-04 02:34:11.836157 | orchestrator | Wednesday 04 February 2026 02:34:08 +0000 (0:00:00.335) 0:03:30.825 **** 2026-02-04 02:34:11.836168 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:34:11.836185 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:34:11.836196 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:34:11.836208 | orchestrator | 2026-02-04 02:34:11.836218 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-04 02:34:11.836229 | orchestrator | Wednesday 04 February 
2026 02:34:09 +0000 (0:00:00.556) 0:03:31.382 **** 2026-02-04 02:34:11.836240 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:34:11.836251 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:34:11.836262 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:34:11.836273 | orchestrator | 2026-02-04 02:34:11.836284 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-04 02:34:11.836295 | orchestrator | Wednesday 04 February 2026 02:34:09 +0000 (0:00:00.343) 0:03:31.725 **** 2026-02-04 02:34:11.836306 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:34:11.836317 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:34:11.836328 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:34:11.836339 | orchestrator | 2026-02-04 02:34:11.836350 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-04 02:34:11.836361 | orchestrator | Wednesday 04 February 2026 02:34:10 +0000 (0:00:00.334) 0:03:32.060 **** 2026-02-04 02:34:11.836372 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:34:11.836383 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:34:11.836394 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:34:11.836405 | orchestrator | 2026-02-04 02:34:11.836416 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-04 02:34:11.836427 | orchestrator | Wednesday 04 February 2026 02:34:10 +0000 (0:00:00.365) 0:03:32.426 **** 2026-02-04 02:34:11.836438 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:34:11.836449 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:34:11.836460 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:34:11.836487 | orchestrator | 2026-02-04 02:34:11.836498 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-04 02:34:11.836509 | orchestrator | Wednesday 04 February 2026 02:34:11 +0000 (0:00:00.689) 
0:03:33.115 **** 2026-02-04 02:34:11.836520 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:34:11.836531 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:34:11.836542 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:34:11.836553 | orchestrator | 2026-02-04 02:34:11.836570 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-04 02:34:11.836610 | orchestrator | Wednesday 04 February 2026 02:34:11 +0000 (0:00:00.570) 0:03:33.685 **** 2026-02-04 02:35:00.093149 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:35:00.093241 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:35:00.093251 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:35:00.093258 | orchestrator | 2026-02-04 02:35:00.093266 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-04 02:35:00.093275 | orchestrator | Wednesday 04 February 2026 02:34:12 +0000 (0:00:00.342) 0:03:34.028 **** 2026-02-04 02:35:00.093283 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:35:00.093290 | orchestrator | 2026-02-04 02:35:00.093297 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-04 02:35:00.093304 | orchestrator | Wednesday 04 February 2026 02:34:13 +0000 (0:00:00.851) 0:03:34.880 **** 2026-02-04 02:35:00.093311 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:35:00.093319 | orchestrator | 2026-02-04 02:35:00.093325 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-04 02:35:00.093332 | orchestrator | Wednesday 04 February 2026 02:34:13 +0000 (0:00:00.176) 0:03:35.057 **** 2026-02-04 02:35:00.093339 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 02:35:00.093346 | orchestrator | 2026-02-04 02:35:00.093353 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] 
**************************** 2026-02-04 02:35:00.093359 | orchestrator | Wednesday 04 February 2026 02:34:14 +0000 (0:00:00.989) 0:03:36.046 **** 2026-02-04 02:35:00.093385 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:35:00.093392 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:35:00.093398 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:35:00.093405 | orchestrator | 2026-02-04 02:35:00.093415 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-04 02:35:00.093425 | orchestrator | Wednesday 04 February 2026 02:34:14 +0000 (0:00:00.338) 0:03:36.384 **** 2026-02-04 02:35:00.093435 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:35:00.093447 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:35:00.093457 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:35:00.093468 | orchestrator | 2026-02-04 02:35:00.093479 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-04 02:35:00.093545 | orchestrator | Wednesday 04 February 2026 02:34:15 +0000 (0:00:00.599) 0:03:36.984 **** 2026-02-04 02:35:00.093557 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:35:00.093568 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:35:00.093580 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:35:00.093591 | orchestrator | 2026-02-04 02:35:00.093602 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-04 02:35:00.093614 | orchestrator | Wednesday 04 February 2026 02:34:16 +0000 (0:00:01.152) 0:03:38.136 **** 2026-02-04 02:35:00.093622 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:35:00.093629 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:35:00.093636 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:35:00.093643 | orchestrator | 2026-02-04 02:35:00.093650 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-04 
02:35:00.093656 | orchestrator | Wednesday 04 February 2026 02:34:17 +0000 (0:00:00.784) 0:03:38.921 **** 2026-02-04 02:35:00.093663 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:35:00.093670 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:35:00.093676 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:35:00.093683 | orchestrator | 2026-02-04 02:35:00.093690 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-04 02:35:00.093698 | orchestrator | Wednesday 04 February 2026 02:34:17 +0000 (0:00:00.759) 0:03:39.681 **** 2026-02-04 02:35:00.093709 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:35:00.093718 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:35:00.093725 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:35:00.093734 | orchestrator | 2026-02-04 02:35:00.093742 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-04 02:35:00.093751 | orchestrator | Wednesday 04 February 2026 02:34:18 +0000 (0:00:00.936) 0:03:40.617 **** 2026-02-04 02:35:00.093759 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:35:00.093766 | orchestrator | 2026-02-04 02:35:00.093796 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-04 02:35:00.093804 | orchestrator | Wednesday 04 February 2026 02:34:19 +0000 (0:00:01.201) 0:03:41.819 **** 2026-02-04 02:35:00.093812 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:35:00.093820 | orchestrator | 2026-02-04 02:35:00.093828 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-04 02:35:00.093836 | orchestrator | Wednesday 04 February 2026 02:34:20 +0000 (0:00:00.697) 0:03:42.516 **** 2026-02-04 02:35:00.093844 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-04 02:35:00.093852 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 
02:35:00.093860 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:35:00.093868 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 02:35:00.093876 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-04 02:35:00.093884 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 02:35:00.093892 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 02:35:00.093899 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-04 02:35:00.093908 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 02:35:00.093924 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-04 02:35:00.093933 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-04 02:35:00.093941 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-04 02:35:00.093950 | orchestrator | 2026-02-04 02:35:00.093958 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-04 02:35:00.093967 | orchestrator | Wednesday 04 February 2026 02:34:23 +0000 (0:00:02.860) 0:03:45.377 **** 2026-02-04 02:35:00.093975 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:35:00.093983 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:35:00.094004 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:35:00.094012 | orchestrator | 2026-02-04 02:35:00.094068 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-04 02:35:00.094093 | orchestrator | Wednesday 04 February 2026 02:34:24 +0000 (0:00:01.112) 0:03:46.489 **** 2026-02-04 02:35:00.094101 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:35:00.094108 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:35:00.094115 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:35:00.094130 | orchestrator | 2026-02-04 
02:35:00.094137 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-04 02:35:00.094144 | orchestrator | Wednesday 04 February 2026 02:34:25 +0000 (0:00:00.630) 0:03:47.120 **** 2026-02-04 02:35:00.094150 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:35:00.094157 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:35:00.094164 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:35:00.094170 | orchestrator | 2026-02-04 02:35:00.094177 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-04 02:35:00.094184 | orchestrator | Wednesday 04 February 2026 02:34:25 +0000 (0:00:00.367) 0:03:47.487 **** 2026-02-04 02:35:00.094191 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:35:00.094198 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:35:00.094205 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:35:00.094218 | orchestrator | 2026-02-04 02:35:00.094225 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-04 02:35:00.094232 | orchestrator | Wednesday 04 February 2026 02:34:26 +0000 (0:00:01.350) 0:03:48.838 **** 2026-02-04 02:35:00.094239 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:35:00.094245 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:35:00.094252 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:35:00.094259 | orchestrator | 2026-02-04 02:35:00.094265 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-04 02:35:00.094272 | orchestrator | Wednesday 04 February 2026 02:34:28 +0000 (0:00:01.218) 0:03:50.056 **** 2026-02-04 02:35:00.094279 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:35:00.094285 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:35:00.094292 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:35:00.094299 | orchestrator | 2026-02-04 02:35:00.094305 | 
orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-04 02:35:00.094312 | orchestrator | Wednesday 04 February 2026 02:34:28 +0000 (0:00:00.606) 0:03:50.663 **** 2026-02-04 02:35:00.094319 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:35:00.094326 | orchestrator | 2026-02-04 02:35:00.094332 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-04 02:35:00.094339 | orchestrator | Wednesday 04 February 2026 02:34:29 +0000 (0:00:00.559) 0:03:51.222 **** 2026-02-04 02:35:00.094346 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:35:00.094353 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:35:00.094360 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:35:00.094366 | orchestrator | 2026-02-04 02:35:00.094373 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-04 02:35:00.094380 | orchestrator | Wednesday 04 February 2026 02:34:29 +0000 (0:00:00.335) 0:03:51.558 **** 2026-02-04 02:35:00.094386 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:35:00.094403 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:35:00.094410 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:35:00.094417 | orchestrator | 2026-02-04 02:35:00.094423 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-04 02:35:00.094430 | orchestrator | Wednesday 04 February 2026 02:34:30 +0000 (0:00:00.553) 0:03:52.111 **** 2026-02-04 02:35:00.094437 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:35:00.094445 | orchestrator | 2026-02-04 02:35:00.094452 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-04 02:35:00.094459 | 
orchestrator | Wednesday 04 February 2026 02:34:30 +0000 (0:00:00.578) 0:03:52.689 **** 2026-02-04 02:35:00.094465 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:35:00.094472 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:35:00.094479 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:35:00.094508 | orchestrator | 2026-02-04 02:35:00.094519 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-04 02:35:00.094526 | orchestrator | Wednesday 04 February 2026 02:34:32 +0000 (0:00:01.726) 0:03:54.416 **** 2026-02-04 02:35:00.094533 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:35:00.094540 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:35:00.094546 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:35:00.094553 | orchestrator | 2026-02-04 02:35:00.094560 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-04 02:35:00.094566 | orchestrator | Wednesday 04 February 2026 02:34:33 +0000 (0:00:01.356) 0:03:55.772 **** 2026-02-04 02:35:00.094573 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:35:00.094580 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:35:00.094586 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:35:00.094593 | orchestrator | 2026-02-04 02:35:00.094600 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-04 02:35:00.094606 | orchestrator | Wednesday 04 February 2026 02:34:35 +0000 (0:00:01.711) 0:03:57.484 **** 2026-02-04 02:35:00.094613 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:35:00.094620 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:35:00.094681 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:35:00.094690 | orchestrator | 2026-02-04 02:35:00.094697 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-04 02:35:00.094704 | orchestrator | 
Wednesday 04 February 2026 02:34:37 +0000 (0:00:01.926) 0:03:59.411 ****
2026-02-04 02:35:00.094711 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:35:00.094718 | orchestrator |
2026-02-04 02:35:00.094725 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-04 02:35:00.094732 | orchestrator | Wednesday 04 February 2026 02:34:38 +0000 (0:00:00.803) 0:04:00.215 ****
2026-02-04 02:35:00.094745 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-04 02:35:00.094752 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:35:00.094759 | orchestrator |
2026-02-04 02:35:00.094772 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-04 02:35:33.854582 | orchestrator | Wednesday 04 February 2026 02:35:00 +0000 (0:00:21.724) 0:04:21.939 ****
2026-02-04 02:35:33.854690 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:35:33.854705 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:35:33.854715 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:35:33.854725 | orchestrator |
2026-02-04 02:35:33.854736 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-04 02:35:33.854746 | orchestrator | Wednesday 04 February 2026 02:35:08 +0000 (0:00:08.317) 0:04:30.256 ****
2026-02-04 02:35:33.854756 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:35:33.854767 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:35:33.854776 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:35:33.854810 | orchestrator |
2026-02-04 02:35:33.854821 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-04 02:35:33.854831 | orchestrator | Wednesday 04 February 2026 02:35:08 +0000 (0:00:00.341) 0:04:30.598 ****
2026-02-04 02:35:33.854843 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a1b83e9e87f64fb72d04c9f2e27469ac3916286a'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-04 02:35:33.854855 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a1b83e9e87f64fb72d04c9f2e27469ac3916286a'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-04 02:35:33.854867 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a1b83e9e87f64fb72d04c9f2e27469ac3916286a'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-04 02:35:33.854878 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a1b83e9e87f64fb72d04c9f2e27469ac3916286a'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-04 02:35:33.854889 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a1b83e9e87f64fb72d04c9f2e27469ac3916286a'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-04 02:35:33.854899 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a1b83e9e87f64fb72d04c9f2e27469ac3916286a'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__a1b83e9e87f64fb72d04c9f2e27469ac3916286a'}])
2026-02-04 02:35:33.854911 | orchestrator |
2026-02-04 02:35:33.854921 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-04 02:35:33.854931 | orchestrator | Wednesday 04 February 2026 02:35:23 +0000 (0:00:14.323) 0:04:44.921 ****
2026-02-04 02:35:33.854940 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:35:33.854950 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:35:33.854959 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:35:33.854969 | orchestrator |
2026-02-04 02:35:33.854979 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-04 02:35:33.854988 | orchestrator | Wednesday 04 February 2026 02:35:23 +0000 (0:00:00.353) 0:04:45.275 ****
2026-02-04 02:35:33.854999 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:35:33.855008 | orchestrator |
2026-02-04 02:35:33.855018 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-04 02:35:33.855028 | orchestrator | Wednesday 04 February 2026 02:35:24 +0000 (0:00:00.819) 0:04:46.095 ****
2026-02-04 02:35:33.855037 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:35:33.855049 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:35:33.855061 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:35:33.855072 | orchestrator |
2026-02-04 02:35:33.855084 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-04 02:35:33.855101 | orchestrator | Wednesday 04 February 2026 02:35:24 +0000 (0:00:00.367) 0:04:46.462 ****
2026-02-04 02:35:33.855139 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:35:33.855152 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:35:33.855163 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:35:33.855174 | orchestrator |
2026-02-04 02:35:33.855201 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-04 02:35:33.855214 | orchestrator | Wednesday 04 February 2026 02:35:24 +0000 (0:00:00.379) 0:04:46.842 ****
2026-02-04 02:35:33.855225 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 02:35:33.855238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 02:35:33.855249 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 02:35:33.855260 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:35:33.855275 | orchestrator |
2026-02-04 02:35:33.855292 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-04 02:35:33.855309 | orchestrator | Wednesday 04 February 2026 02:35:25 +0000 (0:00:00.942) 0:04:47.784 ****
2026-02-04 02:35:33.855323 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:35:33.855336 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:35:33.855350 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:35:33.855375 | orchestrator |
2026-02-04 02:35:33.855392 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-04 02:35:33.855409 | orchestrator |
2026-02-04 02:35:33.855424 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-04 02:35:33.855441 | orchestrator | Wednesday 04 February 2026 02:35:26 +0000 (0:00:00.841) 0:04:48.626 ****
2026-02-04 02:35:33.855457 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:35:33.855476 | orchestrator |
2026-02-04 02:35:33.855515 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-04 02:35:33.855535 | orchestrator | Wednesday 04 February 2026 02:35:27 +0000 (0:00:00.583) 0:04:49.210 ****
2026-02-04 02:35:33.855550 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:35:33.855565 | orchestrator |
2026-02-04 02:35:33.855581 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-04 02:35:33.855597 | orchestrator | Wednesday 04 February 2026 02:35:28 +0000 (0:00:00.834) 0:04:50.044 ****
2026-02-04 02:35:33.855613 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:35:33.855623 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:35:33.855632 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:35:33.855642 | orchestrator |
2026-02-04 02:35:33.855652 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-04 02:35:33.855661 | orchestrator | Wednesday 04 February 2026 02:35:28 +0000 (0:00:00.746) 0:04:50.790 ****
2026-02-04 02:35:33.855671 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:35:33.855681 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:35:33.855690 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:35:33.855700 | orchestrator |
2026-02-04 02:35:33.855709 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-04 02:35:33.855719 | orchestrator | Wednesday 04 February 2026 02:35:29 +0000 (0:00:00.366) 0:04:51.157 ****
2026-02-04 02:35:33.855729 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:35:33.855738 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:35:33.855748 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:35:33.855757 | orchestrator |
2026-02-04 02:35:33.855767 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-04 02:35:33.855776 | orchestrator | Wednesday 04 February 2026 02:35:29 +0000 (0:00:00.317) 0:04:51.475 ****
2026-02-04 02:35:33.855786 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:35:33.855796 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:35:33.855815 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:35:33.855824 | orchestrator |
2026-02-04 02:35:33.855834 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-04 02:35:33.855844 | orchestrator | Wednesday 04 February 2026 02:35:30 +0000 (0:00:00.706) 0:04:52.182 ****
2026-02-04 02:35:33.855854 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:35:33.855871 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:35:33.855887 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:35:33.855903 | orchestrator |
2026-02-04 02:35:33.855919 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-04 02:35:33.855935 | orchestrator | Wednesday 04 February 2026 02:35:31 +0000 (0:00:00.748) 0:04:52.930 ****
2026-02-04 02:35:33.855950 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:35:33.855965 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:35:33.855981 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:35:33.855998 | orchestrator |
2026-02-04 02:35:33.856016 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-04 02:35:33.856032 | orchestrator | Wednesday 04 February 2026 02:35:31 +0000 (0:00:00.335) 0:04:53.266 ****
2026-02-04 02:35:33.856049 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:35:33.856065 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:35:33.856082 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:35:33.856094 | orchestrator |
2026-02-04 02:35:33.856104 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-04 02:35:33.856114 | orchestrator | Wednesday 04 February 2026 02:35:31 +0000 (0:00:00.328) 0:04:53.594 ****
2026-02-04 02:35:33.856124 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:35:33.856133 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:35:33.856143 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:35:33.856152 | orchestrator |
2026-02-04 02:35:33.856162 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-04 02:35:33.856172 | orchestrator | Wednesday 04 February 2026 02:35:32 +0000 (0:00:01.068) 0:04:54.663 ****
2026-02-04 02:35:33.856181 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:35:33.856191 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:35:33.856200 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:35:33.856210 | orchestrator |
2026-02-04 02:35:33.856220 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-04 02:35:33.856229 | orchestrator | Wednesday 04 February 2026 02:35:33 +0000 (0:00:00.723) 0:04:55.386 ****
2026-02-04 02:35:33.856239 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:35:33.856249 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:35:33.856267 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:35:33.856277 | orchestrator |
2026-02-04 02:35:33.856287 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-04 02:35:33.856306 | orchestrator | Wednesday 04 February 2026 02:35:33 +0000 (0:00:00.316) 0:04:55.702 ****
2026-02-04 02:36:04.493831 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:36:04.493949 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:36:04.493957 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:36:04.493962 | orchestrator |
2026-02-04 02:36:04.493967 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-04 02:36:04.493972 | orchestrator | Wednesday 04 February 2026 02:35:34 +0000 (0:00:00.358) 0:04:56.061 ****
2026-02-04 02:36:04.493976 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:36:04.493980 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:36:04.493984 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:36:04.493988 | orchestrator |
2026-02-04 02:36:04.493992 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-04 02:36:04.493995 | orchestrator | Wednesday 04 February 2026 02:35:34 +0000 (0:00:00.597) 0:04:56.659 ****
2026-02-04 02:36:04.493999 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:36:04.494003 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:36:04.494007 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:36:04.494011 | orchestrator |
2026-02-04 02:36:04.494062 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-04 02:36:04.494067 | orchestrator | Wednesday 04 February 2026 02:35:35 +0000 (0:00:00.340) 0:04:56.999 ****
2026-02-04 02:36:04.494071 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:36:04.494075 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:36:04.494079 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:36:04.494082 | orchestrator |
2026-02-04 02:36:04.494086 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-04 02:36:04.494090 | orchestrator | Wednesday 04 February 2026 02:35:35 +0000 (0:00:00.344) 0:04:57.344 ****
2026-02-04 02:36:04.494094 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:36:04.494098 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:36:04.494101 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:36:04.494105 | orchestrator |
2026-02-04 02:36:04.494109 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-04 02:36:04.494113 | orchestrator | Wednesday 04 February 2026 02:35:35 +0000 (0:00:00.331) 0:04:57.675 ****
2026-02-04 02:36:04.494117 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:36:04.494121 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:36:04.494124 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:36:04.494128 | orchestrator |
2026-02-04 02:36:04.494132 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-04 02:36:04.494136 | orchestrator | Wednesday 04 February 2026 02:35:36 +0000 (0:00:00.644) 0:04:58.320 ****
2026-02-04 02:36:04.494139 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:36:04.494143 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:36:04.494147 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:36:04.494151 | orchestrator |
2026-02-04 02:36:04.494154 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-04 02:36:04.494158 | orchestrator | Wednesday 04 February 2026 02:35:36 +0000 (0:00:00.346) 0:04:58.667 ****
2026-02-04 02:36:04.494162 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:36:04.494166 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:36:04.494170 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:36:04.494174 | orchestrator |
2026-02-04 02:36:04.494178 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-04 02:36:04.494181 | orchestrator | Wednesday 04 February 2026 02:35:37 +0000 (0:00:00.349) 0:04:59.016 ****
2026-02-04 02:36:04.494185 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:36:04.494189 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:36:04.494193 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:36:04.494196 | orchestrator |
2026-02-04 02:36:04.494200 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-04 02:36:04.494204 | orchestrator | Wednesday 04 February 2026 02:35:37 +0000 (0:00:00.812) 0:04:59.829 ****
2026-02-04 02:36:04.494208 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 02:36:04.494212 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 02:36:04.494217 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 02:36:04.494220 | orchestrator |
2026-02-04 02:36:04.494224 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-04 02:36:04.494228 | orchestrator | Wednesday 04 February 2026 02:35:38 +0000 (0:00:00.693) 0:05:00.523 ****
2026-02-04 02:36:04.494232 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:36:04.494237 | orchestrator |
2026-02-04 02:36:04.494240 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-04 02:36:04.494244 | orchestrator | Wednesday 04 February 2026 02:35:39 +0000 (0:00:00.565) 0:05:01.088 ****
2026-02-04 02:36:04.494248 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:36:04.494252 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:36:04.494255 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:36:04.494260 | orchestrator |
2026-02-04 02:36:04.494263 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-04 02:36:04.494271 | orchestrator | Wednesday 04 February 2026 02:35:40 +0000 (0:00:01.041) 0:05:02.129 ****
2026-02-04 02:36:04.494275 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:36:04.494279 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:36:04.494283 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:36:04.494286 | orchestrator |
2026-02-04 02:36:04.494290 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-04 02:36:04.494295 | orchestrator | Wednesday 04 February 2026 02:35:40 +0000 (0:00:00.333) 0:05:02.462 ****
2026-02-04 02:36:04.494299 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 02:36:04.494303 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 02:36:04.494307 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 02:36:04.494311 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-02-04 02:36:04.494315 | orchestrator |
2026-02-04 02:36:04.494328 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-04 02:36:04.494332 | orchestrator | Wednesday 04 February 2026 02:35:50 +0000 (0:00:09.911) 0:05:12.374 ****
2026-02-04 02:36:04.494336 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:36:04.494351 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:36:04.494355 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:36:04.494359 | orchestrator |
2026-02-04 02:36:04.494363 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-04 02:36:04.494366 | orchestrator | Wednesday 04 February 2026 02:35:50 +0000 (0:00:00.371) 0:05:12.745 ****
2026-02-04 02:36:04.494370 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-04 02:36:04.494374 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-04 02:36:04.494378 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-04 02:36:04.494382 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-04 02:36:04.494387 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 02:36:04.494391 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 02:36:04.494396 | orchestrator |
2026-02-04 02:36:04.494400 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-04 02:36:04.494405 | orchestrator | Wednesday 04 February 2026 02:35:52 +0000 (0:00:01.954) 0:05:14.700 ****
2026-02-04 02:36:04.494409 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-04 02:36:04.494414 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-04 02:36:04.494418 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-04 02:36:04.494423 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 02:36:04.494427 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-04 02:36:04.494431 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-04 02:36:04.494436 | orchestrator |
2026-02-04 02:36:04.494440 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-04 02:36:04.494444 | orchestrator | Wednesday 04 February 2026 02:35:54 +0000 (0:00:01.519) 0:05:16.220 ****
2026-02-04 02:36:04.494449 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:36:04.494453 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:36:04.494458 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:36:04.494462 | orchestrator |
2026-02-04 02:36:04.494467 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-04 02:36:04.494472 | orchestrator | Wednesday 04 February 2026 02:35:55 +0000 (0:00:00.694) 0:05:16.915 ****
2026-02-04 02:36:04.494476 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:36:04.494479 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:36:04.494483 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:36:04.494487 | orchestrator |
2026-02-04 02:36:04.494491 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-04 02:36:04.494494 | orchestrator | Wednesday 04 February 2026 02:35:55 +0000 (0:00:00.348) 0:05:17.263 ****
2026-02-04 02:36:04.494502 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:36:04.494523 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:36:04.494527 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:36:04.494531 | orchestrator |
2026-02-04 02:36:04.494535 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-04 02:36:04.494539 | orchestrator | Wednesday 04 February 2026 02:35:55 +0000 (0:00:00.306) 0:05:17.570 ****
2026-02-04 02:36:04.494543 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:36:04.494547 | orchestrator |
2026-02-04 02:36:04.494551 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-04 02:36:04.494555 | orchestrator | Wednesday 04 February 2026 02:35:56 +0000 (0:00:00.830) 0:05:18.400 ****
2026-02-04 02:36:04.494558 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:36:04.494562 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:36:04.494566 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:36:04.494570 | orchestrator |
2026-02-04 02:36:04.494573 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-04 02:36:04.494577 | orchestrator | Wednesday 04 February 2026 02:35:56 +0000 (0:00:00.353) 0:05:18.754 ****
2026-02-04 02:36:04.494581 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:36:04.494585 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:36:04.494588 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:36:04.494592 | orchestrator |
2026-02-04 02:36:04.494596 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-04 02:36:04.494600 | orchestrator | Wednesday 04 February 2026 02:35:57 +0000 (0:00:00.365) 0:05:19.120 ****
2026-02-04 02:36:04.494603 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:36:04.494607 | orchestrator |
2026-02-04 02:36:04.494611 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-04 02:36:04.494615 | orchestrator | Wednesday 04 February 2026 02:35:58 +0000 (0:00:00.815) 0:05:19.936 ****
2026-02-04 02:36:04.494619 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:36:04.494622 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:36:04.494626 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:36:04.494630 | orchestrator |
2026-02-04 02:36:04.494634 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-04 02:36:04.494637 | orchestrator | Wednesday 04 February 2026 02:35:59 +0000 (0:00:01.236) 0:05:21.172 ****
2026-02-04 02:36:04.494641 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:36:04.494645 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:36:04.494649 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:36:04.494652 | orchestrator |
2026-02-04 02:36:04.494656 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-04 02:36:04.494660 | orchestrator | Wednesday 04 February 2026 02:36:00 +0000 (0:00:01.142) 0:05:22.315 ****
2026-02-04 02:36:04.494664 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:36:04.494667 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:36:04.494671 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:36:04.494675 | orchestrator |
2026-02-04 02:36:04.494679 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-04 02:36:04.494685 | orchestrator | Wednesday 04 February 2026 02:36:02 +0000 (0:00:02.107) 0:05:24.423 ****
2026-02-04 02:36:04.494690 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:36:04.494693 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:36:04.494697 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:36:04.494701 | orchestrator |
2026-02-04 02:36:04.494707 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-04 02:36:55.003870 | orchestrator | Wednesday 04 February 2026 02:36:04 +0000 (0:00:01.905) 0:05:26.328 ****
2026-02-04 02:36:55.004043 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:36:55.004065 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:36:55.004078 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-04 02:36:55.004115 | orchestrator |
2026-02-04 02:36:55.004128 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-04 02:36:55.004139 | orchestrator | Wednesday 04 February 2026 02:36:04 +0000 (0:00:00.422) 0:05:26.751 ****
2026-02-04 02:36:55.004151 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-02-04 02:36:55.004164 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-02-04 02:36:55.004176 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-02-04 02:36:55.004188 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-02-04 02:36:55.004198 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-04 02:36:55.004211 | orchestrator |
2026-02-04 02:36:55.004223 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-04 02:36:55.004235 | orchestrator | Wednesday 04 February 2026 02:36:29 +0000 (0:00:24.205) 0:05:50.956 ****
2026-02-04 02:36:55.004247 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-04 02:36:55.004259 | orchestrator |
2026-02-04 02:36:55.004272 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-04 02:36:55.004284 | orchestrator | Wednesday 04 February 2026 02:36:30 +0000 (0:00:01.547) 0:05:52.503 ****
2026-02-04 02:36:55.004296 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:36:55.004308 | orchestrator |
2026-02-04 02:36:55.004319 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-04 02:36:55.004331 | orchestrator | Wednesday 04 February 2026 02:36:31 +0000 (0:00:00.585) 0:05:53.089 ****
2026-02-04 02:36:55.004345 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:36:55.004357 | orchestrator |
2026-02-04 02:36:55.004369 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-04 02:36:55.004381 | orchestrator | Wednesday 04 February 2026 02:36:31 +0000 (0:00:00.150) 0:05:53.239 ****
2026-02-04 02:36:55.004393 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-02-04 02:36:55.004406 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-02-04 02:36:55.004419 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-02-04 02:36:55.004432 | orchestrator |
2026-02-04 02:36:55.004445 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-04 02:36:55.004458 | orchestrator | Wednesday 04 February 2026 02:36:37 +0000 (0:00:06.425) 0:05:59.665 ****
2026-02-04 02:36:55.004470 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-04 02:36:55.004484 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-02-04 02:36:55.004497 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-02-04 02:36:55.004510 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-04 02:36:55.004523 | orchestrator |
2026-02-04 02:36:55.004558 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-04 02:36:55.004569 | orchestrator | Wednesday 04 February 2026 02:36:42 +0000 (0:00:04.833) 0:06:04.498 ****
2026-02-04 02:36:55.004581 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:36:55.004594 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:36:55.004605 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:36:55.004618 | orchestrator |
2026-02-04 02:36:55.004630 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-04 02:36:55.004642 | orchestrator | Wednesday 04 February 2026 02:36:43 +0000 (0:00:00.891) 0:06:05.390 ****
2026-02-04 02:36:55.004654 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:36:55.004666 | orchestrator |
2026-02-04 02:36:55.004687 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-04 02:36:55.004698 | orchestrator | Wednesday 04 February 2026 02:36:44 +0000 (0:00:00.568) 0:06:05.958 ****
2026-02-04 02:36:55.004709 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:36:55.004722 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:36:55.004734 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:36:55.004747 | orchestrator |
2026-02-04 02:36:55.004765 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-04 02:36:55.004776 | orchestrator | Wednesday 04 February 2026 02:36:44 +0000 (0:00:00.335) 0:06:06.294 ****
2026-02-04 02:36:55.004788 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:36:55.004799 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:36:55.004810 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:36:55.004822 | orchestrator |
2026-02-04 02:36:55.004833 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-04 02:36:55.004845 | orchestrator | Wednesday 04 February 2026 02:36:45 +0000 (0:00:01.436) 0:06:07.730 ****
2026-02-04 02:36:55.004857 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 02:36:55.004869 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 02:36:55.004897 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 02:36:55.004909 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:36:55.004920 | orchestrator |
2026-02-04 02:36:55.004933 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-04 02:36:55.004945 | orchestrator | Wednesday 04 February 2026 02:36:46 +0000 (0:00:00.678) 0:06:08.409 ****
2026-02-04 02:36:55.004974 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:36:55.004986 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:36:55.004998 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:36:55.005009 | orchestrator |
2026-02-04 02:36:55.005020 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-04 02:36:55.005031 | orchestrator |
2026-02-04 02:36:55.005043 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-04 02:36:55.005054 | orchestrator | Wednesday 04 February 2026 02:36:47 +0000 (0:00:00.571) 0:06:08.981 ****
2026-02-04 02:36:55.005066 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 02:36:55.005080 | orchestrator |
2026-02-04 02:36:55.005091 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-04 02:36:55.005102 | orchestrator | Wednesday 04 February 2026 02:36:47 +0000 (0:00:00.752) 0:06:09.733 ****
2026-02-04 02:36:55.005113 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 02:36:55.005124 | orchestrator |
2026-02-04 02:36:55.005135 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-04 02:36:55.005146 | orchestrator | Wednesday 04 February 2026 02:36:48 +0000 (0:00:00.543) 0:06:10.277 ****
2026-02-04 02:36:55.005157 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:36:55.005168 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:36:55.005179 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:36:55.005190 | orchestrator |
2026-02-04 02:36:55.005201 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-04 02:36:55.005212 | orchestrator | Wednesday 04 February 2026 02:36:48 +0000 (0:00:00.320) 0:06:10.597 ****
2026-02-04 02:36:55.005224 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:36:55.005235 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:36:55.005246 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:36:55.005257 | orchestrator |
2026-02-04 02:36:55.005269 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-04 02:36:55.005279 | orchestrator | Wednesday 04 February 2026 02:36:49 +0000 (0:00:00.954) 0:06:11.551 ****
2026-02-04 02:36:55.005290 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:36:55.005302 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:36:55.005320 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:36:55.005332 | orchestrator |
2026-02-04 02:36:55.005343 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-04 02:36:55.005355 | orchestrator | Wednesday 04 February 2026 02:36:50 +0000 (0:00:00.703) 0:06:12.255 ****
2026-02-04 02:36:55.005367 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:36:55.005378 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:36:55.005389 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:36:55.005400 | orchestrator |
2026-02-04 02:36:55.005412 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-04 02:36:55.005422 | orchestrator | Wednesday 04 February 2026 02:36:51 +0000 (0:00:00.681) 0:06:12.937 ****
2026-02-04 02:36:55.005434 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:36:55.005445 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:36:55.005456 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:36:55.005466 | orchestrator |
2026-02-04 02:36:55.005478 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-04 02:36:55.005490 | orchestrator | Wednesday 04 February 2026 02:36:51 +0000 (0:00:00.580) 0:06:13.517 ****
2026-02-04 02:36:55.005501 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:36:55.005512 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:36:55.005523 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:36:55.005561 | orchestrator |
2026-02-04 02:36:55.005572 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-04 02:36:55.005584 | orchestrator | Wednesday 04 February 2026 02:36:51 +0000 (0:00:00.336) 0:06:13.854 ****
2026-02-04 02:36:55.005594 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:36:55.005606 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:36:55.005617 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:36:55.005628 | orchestrator |
2026-02-04 02:36:55.005640 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-04 02:36:55.005652 | orchestrator | Wednesday 04 February 2026 02:36:52 +0000 (0:00:00.341) 0:06:14.196 ****
2026-02-04 02:36:55.005664 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:36:55.005675 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:36:55.005687 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:36:55.005698 | orchestrator |
2026-02-04 02:36:55.005709 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-04 02:36:55.005720 | orchestrator | Wednesday 04 February 2026 02:36:53 +0000 (0:00:00.722) 0:06:14.918 ****
2026-02-04 02:36:55.005731 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:36:55.005742 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:36:55.005753 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:36:55.005763 | orchestrator |
2026-02-04 02:36:55.005774 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-04 02:36:55.005785 | orchestrator | Wednesday 04 February 2026 02:36:53 +0000 (0:00:00.921) 0:06:15.839 ****
2026-02-04 02:36:55.005796 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:36:55.005807 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:36:55.005817 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:36:55.005828 | orchestrator |
2026-02-04 02:36:55.005839 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-04 02:36:55.005850 | orchestrator | Wednesday 04 February 2026 02:36:54 +0000 (0:00:00.329) 0:06:16.169 ****
2026-02-04 02:36:55.005862 | orchestrator | skipping:
[testbed-node-3] 2026-02-04 02:36:55.005873 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:36:55.005883 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:36:55.005894 | orchestrator | 2026-02-04 02:36:55.005905 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-04 02:36:55.005923 | orchestrator | Wednesday 04 February 2026 02:36:54 +0000 (0:00:00.335) 0:06:16.504 **** 2026-02-04 02:36:55.005934 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:36:55.005945 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:36:55.005956 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:36:55.005967 | orchestrator | 2026-02-04 02:36:55.005985 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-04 02:36:55.006004 | orchestrator | Wednesday 04 February 2026 02:36:54 +0000 (0:00:00.348) 0:06:16.852 **** 2026-02-04 02:37:53.062985 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:37:53.063081 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:37:53.063092 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:37:53.063100 | orchestrator | 2026-02-04 02:37:53.063107 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-04 02:37:53.063115 | orchestrator | Wednesday 04 February 2026 02:36:55 +0000 (0:00:00.610) 0:06:17.463 **** 2026-02-04 02:37:53.063122 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:37:53.063128 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:37:53.063135 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:37:53.063141 | orchestrator | 2026-02-04 02:37:53.063148 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-04 02:37:53.063154 | orchestrator | Wednesday 04 February 2026 02:36:55 +0000 (0:00:00.347) 0:06:17.810 **** 2026-02-04 02:37:53.063161 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:37:53.063168 | 
orchestrator | skipping: [testbed-node-4] 2026-02-04 02:37:53.063174 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:37:53.063180 | orchestrator | 2026-02-04 02:37:53.063186 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-04 02:37:53.063193 | orchestrator | Wednesday 04 February 2026 02:36:56 +0000 (0:00:00.344) 0:06:18.155 **** 2026-02-04 02:37:53.063199 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:37:53.063205 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:37:53.063212 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:37:53.063218 | orchestrator | 2026-02-04 02:37:53.063224 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-04 02:37:53.063230 | orchestrator | Wednesday 04 February 2026 02:36:56 +0000 (0:00:00.313) 0:06:18.469 **** 2026-02-04 02:37:53.063237 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:37:53.063243 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:37:53.063249 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:37:53.063255 | orchestrator | 2026-02-04 02:37:53.063262 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-04 02:37:53.063268 | orchestrator | Wednesday 04 February 2026 02:36:57 +0000 (0:00:00.580) 0:06:19.050 **** 2026-02-04 02:37:53.063274 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:37:53.063281 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:37:53.063287 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:37:53.063293 | orchestrator | 2026-02-04 02:37:53.063300 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-04 02:37:53.063306 | orchestrator | Wednesday 04 February 2026 02:36:57 +0000 (0:00:00.361) 0:06:19.411 **** 2026-02-04 02:37:53.063312 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:37:53.063319 | orchestrator | ok: 
[testbed-node-4] 2026-02-04 02:37:53.063325 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:37:53.063331 | orchestrator | 2026-02-04 02:37:53.063337 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-04 02:37:53.063344 | orchestrator | Wednesday 04 February 2026 02:36:58 +0000 (0:00:00.591) 0:06:20.002 **** 2026-02-04 02:37:53.063350 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:37:53.063356 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:37:53.063363 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:37:53.063369 | orchestrator | 2026-02-04 02:37:53.063375 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-04 02:37:53.063382 | orchestrator | Wednesday 04 February 2026 02:36:58 +0000 (0:00:00.585) 0:06:20.588 **** 2026-02-04 02:37:53.063388 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 02:37:53.063395 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 02:37:53.063401 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 02:37:53.063422 | orchestrator | 2026-02-04 02:37:53.063429 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-04 02:37:53.063435 | orchestrator | Wednesday 04 February 2026 02:36:59 +0000 (0:00:00.707) 0:06:21.295 **** 2026-02-04 02:37:53.063442 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:37:53.063449 | orchestrator | 2026-02-04 02:37:53.063455 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-04 02:37:53.063461 | orchestrator | Wednesday 04 February 2026 02:37:00 +0000 (0:00:00.580) 0:06:21.875 **** 2026-02-04 02:37:53.063467 | orchestrator | skipping: 
[testbed-node-3] 2026-02-04 02:37:53.063474 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:37:53.063480 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:37:53.063486 | orchestrator | 2026-02-04 02:37:53.063492 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-04 02:37:53.063499 | orchestrator | Wednesday 04 February 2026 02:37:00 +0000 (0:00:00.331) 0:06:22.207 **** 2026-02-04 02:37:53.063505 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:37:53.063511 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:37:53.063517 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:37:53.063525 | orchestrator | 2026-02-04 02:37:53.063533 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-04 02:37:53.063540 | orchestrator | Wednesday 04 February 2026 02:37:00 +0000 (0:00:00.601) 0:06:22.809 **** 2026-02-04 02:37:53.063548 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:37:53.063604 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:37:53.063612 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:37:53.063620 | orchestrator | 2026-02-04 02:37:53.063627 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-04 02:37:53.063634 | orchestrator | Wednesday 04 February 2026 02:37:01 +0000 (0:00:00.652) 0:06:23.462 **** 2026-02-04 02:37:53.063642 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:37:53.063649 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:37:53.063656 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:37:53.063664 | orchestrator | 2026-02-04 02:37:53.063679 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-04 02:37:53.063687 | orchestrator | Wednesday 04 February 2026 02:37:01 +0000 (0:00:00.339) 0:06:23.801 **** 2026-02-04 02:37:53.063695 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-04 02:37:53.063715 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-04 02:37:53.063724 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-04 02:37:53.063731 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-04 02:37:53.063739 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-04 02:37:53.063747 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-04 02:37:53.063755 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-04 02:37:53.063763 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-04 02:37:53.063770 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-04 02:37:53.063778 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-04 02:37:53.063785 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-04 02:37:53.063793 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-04 02:37:53.063800 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-04 02:37:53.063808 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-04 02:37:53.063820 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-04 02:37:53.063828 | orchestrator | 2026-02-04 02:37:53.063835 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-02-04 02:37:53.063842 | orchestrator | Wednesday 04 February 2026 02:37:04 +0000 (0:00:02.065) 0:06:25.867 **** 2026-02-04 02:37:53.063850 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:37:53.063857 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:37:53.063864 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:37:53.063872 | orchestrator | 2026-02-04 02:37:53.063880 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-04 02:37:53.063886 | orchestrator | Wednesday 04 February 2026 02:37:04 +0000 (0:00:00.604) 0:06:26.471 **** 2026-02-04 02:37:53.063893 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:37:53.063899 | orchestrator | 2026-02-04 02:37:53.063906 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-04 02:37:53.063912 | orchestrator | Wednesday 04 February 2026 02:37:05 +0000 (0:00:00.527) 0:06:26.999 **** 2026-02-04 02:37:53.063918 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-04 02:37:53.063925 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-04 02:37:53.063931 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-04 02:37:53.063937 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-04 02:37:53.063944 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-04 02:37:53.063950 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-04 02:37:53.063956 | orchestrator | 2026-02-04 02:37:53.063962 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-04 02:37:53.063969 | orchestrator | Wednesday 04 February 2026 02:37:06 +0000 (0:00:01.200) 0:06:28.199 **** 2026-02-04 02:37:53.063975 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:37:53.063981 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-04 02:37:53.063988 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-04 02:37:53.063994 | orchestrator | 2026-02-04 02:37:53.064000 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-04 02:37:53.064006 | orchestrator | Wednesday 04 February 2026 02:37:08 +0000 (0:00:01.956) 0:06:30.155 **** 2026-02-04 02:37:53.064013 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-04 02:37:53.064019 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-04 02:37:53.064026 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:37:53.064032 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-04 02:37:53.064038 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-04 02:37:53.064044 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:37:53.064050 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-04 02:37:53.064057 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-04 02:37:53.064063 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:37:53.064069 | orchestrator | 2026-02-04 02:37:53.064075 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-04 02:37:53.064082 | orchestrator | Wednesday 04 February 2026 02:37:09 +0000 (0:00:01.144) 0:06:31.300 **** 2026-02-04 02:37:53.064088 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-04 02:37:53.064094 | orchestrator | 2026-02-04 02:37:53.064100 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-04 02:37:53.064107 | orchestrator | Wednesday 04 February 2026 02:37:11 +0000 (0:00:02.094) 0:06:33.395 **** 2026-02-04 02:37:53.064116 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:37:53.064123 | orchestrator | 2026-02-04 02:37:53.064133 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-02-04 02:37:53.064139 | orchestrator | Wednesday 04 February 2026 02:37:12 +0000 (0:00:00.566) 0:06:33.962 **** 2026-02-04 02:37:53.064149 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}) 2026-02-04 02:38:30.669656 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}) 2026-02-04 02:38:30.669749 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'}) 2026-02-04 02:38:30.669761 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'}) 2026-02-04 02:38:30.669770 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'}) 2026-02-04 02:38:30.669777 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'}) 2026-02-04 02:38:30.669785 | orchestrator | 2026-02-04 02:38:30.669794 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-04 02:38:30.669803 | orchestrator | Wednesday 04 February 2026 02:37:53 +0000 (0:00:40.948) 0:07:14.911 **** 2026-02-04 02:38:30.669810 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:38:30.669818 | orchestrator | skipping: [testbed-node-4] 2026-02-04 
02:38:30.669825 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:38:30.669832 | orchestrator | 2026-02-04 02:38:30.669840 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-04 02:38:30.669847 | orchestrator | Wednesday 04 February 2026 02:37:53 +0000 (0:00:00.332) 0:07:15.243 **** 2026-02-04 02:38:30.669855 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:38:30.669863 | orchestrator | 2026-02-04 02:38:30.669870 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-04 02:38:30.669877 | orchestrator | Wednesday 04 February 2026 02:37:53 +0000 (0:00:00.559) 0:07:15.803 **** 2026-02-04 02:38:30.669885 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:38:30.669893 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:38:30.669900 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:38:30.669908 | orchestrator | 2026-02-04 02:38:30.669915 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-02-04 02:38:30.669922 | orchestrator | Wednesday 04 February 2026 02:37:54 +0000 (0:00:00.988) 0:07:16.791 **** 2026-02-04 02:38:30.669931 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:38:30.669938 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:38:30.669946 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:38:30.669953 | orchestrator | 2026-02-04 02:38:30.669960 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-04 02:38:30.669968 | orchestrator | Wednesday 04 February 2026 02:37:57 +0000 (0:00:02.545) 0:07:19.337 **** 2026-02-04 02:38:30.669975 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:38:30.669983 | orchestrator | 2026-02-04 02:38:30.669991 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-02-04 02:38:30.669998 | orchestrator | Wednesday 04 February 2026 02:37:58 +0000 (0:00:00.808) 0:07:20.145 **** 2026-02-04 02:38:30.670006 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:38:30.670013 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:38:30.670067 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:38:30.670074 | orchestrator | 2026-02-04 02:38:30.670082 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-04 02:38:30.670089 | orchestrator | Wednesday 04 February 2026 02:37:59 +0000 (0:00:01.139) 0:07:21.284 **** 2026-02-04 02:38:30.670114 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:38:30.670122 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:38:30.670129 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:38:30.670136 | orchestrator | 2026-02-04 02:38:30.670144 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-04 02:38:30.670151 | orchestrator | Wednesday 04 February 2026 02:38:00 +0000 (0:00:01.161) 0:07:22.446 **** 2026-02-04 02:38:30.670158 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:38:30.670165 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:38:30.670174 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:38:30.670183 | orchestrator | 2026-02-04 02:38:30.670192 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-04 02:38:30.670200 | orchestrator | Wednesday 04 February 2026 02:38:02 +0000 (0:00:01.813) 0:07:24.259 **** 2026-02-04 02:38:30.670208 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:38:30.670216 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:38:30.670224 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:38:30.670233 | orchestrator | 2026-02-04 02:38:30.670241 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-02-04 02:38:30.670250 | orchestrator | Wednesday 04 February 2026 02:38:02 +0000 (0:00:00.570) 0:07:24.830 **** 2026-02-04 02:38:30.670258 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:38:30.670267 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:38:30.670275 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:38:30.670283 | orchestrator | 2026-02-04 02:38:30.670292 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-04 02:38:30.670301 | orchestrator | Wednesday 04 February 2026 02:38:03 +0000 (0:00:00.353) 0:07:25.184 **** 2026-02-04 02:38:30.670309 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-04 02:38:30.670330 | orchestrator | ok: [testbed-node-4] => (item=4) 2026-02-04 02:38:30.670339 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-04 02:38:30.670347 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-02-04 02:38:30.670356 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-04 02:38:30.670364 | orchestrator | ok: [testbed-node-5] => (item=3) 2026-02-04 02:38:30.670372 | orchestrator | 2026-02-04 02:38:30.670381 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-04 02:38:30.670402 | orchestrator | Wednesday 04 February 2026 02:38:04 +0000 (0:00:00.987) 0:07:26.172 **** 2026-02-04 02:38:30.670412 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-04 02:38:30.670420 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-02-04 02:38:30.670446 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-04 02:38:30.670454 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-04 02:38:30.670461 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-04 02:38:30.670469 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-04 02:38:30.670476 | orchestrator | 2026-02-04 02:38:30.670483 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-02-04 02:38:30.670490 | orchestrator | Wednesday 04 February 2026 02:38:06 +0000 (0:00:02.142) 0:07:28.315 **** 2026-02-04 02:38:30.670498 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-04 02:38:30.670505 | orchestrator | changed: [testbed-node-4] => (item=4) 2026-02-04 02:38:30.670512 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-04 02:38:30.670519 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-04 02:38:30.670526 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-04 02:38:30.670533 | orchestrator | changed: [testbed-node-5] => (item=3) 2026-02-04 02:38:30.670541 | orchestrator | 2026-02-04 02:38:30.670548 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-04 02:38:30.670555 | orchestrator | Wednesday 04 February 2026 02:38:10 +0000 (0:00:03.966) 0:07:32.281 **** 2026-02-04 02:38:30.670562 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:38:30.670592 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:38:30.670608 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-04 02:38:30.670615 | orchestrator | 2026-02-04 02:38:30.670622 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-04 02:38:30.670630 | orchestrator | Wednesday 04 February 2026 02:38:13 +0000 (0:00:03.199) 0:07:35.481 **** 2026-02-04 02:38:30.670637 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:38:30.670644 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:38:30.670652 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-02-04 02:38:30.670659 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-04 02:38:30.670666 | orchestrator | 2026-02-04 02:38:30.670674 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-04 02:38:30.670681 | orchestrator | Wednesday 04 February 2026 02:38:26 +0000 (0:00:12.494) 0:07:47.976 **** 2026-02-04 02:38:30.670688 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:38:30.670695 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:38:30.670702 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:38:30.670710 | orchestrator | 2026-02-04 02:38:30.670717 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-04 02:38:30.670724 | orchestrator | Wednesday 04 February 2026 02:38:27 +0000 (0:00:01.105) 0:07:49.081 **** 2026-02-04 02:38:30.670732 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:38:30.670739 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:38:30.670746 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:38:30.670753 | orchestrator | 2026-02-04 02:38:30.670760 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-04 02:38:30.670768 | orchestrator | Wednesday 04 February 2026 02:38:27 +0000 (0:00:00.348) 0:07:49.429 **** 2026-02-04 02:38:30.670775 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:38:30.670782 | orchestrator | 2026-02-04 02:38:30.670789 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-04 02:38:30.670796 | orchestrator | Wednesday 04 February 2026 02:38:28 +0000 (0:00:00.828) 0:07:50.258 **** 2026-02-04 02:38:30.670804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 02:38:30.670811 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)
2026-02-04 02:38:30.670819 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 02:38:30.670826 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:30.670833 | orchestrator |
2026-02-04 02:38:30.670840 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-04 02:38:30.670847 | orchestrator | Wednesday 04 February 2026 02:38:28 +0000 (0:00:00.430) 0:07:50.689 ****
2026-02-04 02:38:30.670855 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:30.670862 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:38:30.670869 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:38:30.670876 | orchestrator |
2026-02-04 02:38:30.670883 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-04 02:38:30.670891 | orchestrator | Wednesday 04 February 2026 02:38:29 +0000 (0:00:00.363) 0:07:51.052 ****
2026-02-04 02:38:30.670898 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:30.670905 | orchestrator |
2026-02-04 02:38:30.670912 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-04 02:38:30.670919 | orchestrator | Wednesday 04 February 2026 02:38:29 +0000 (0:00:00.235) 0:07:51.287 ****
2026-02-04 02:38:30.670927 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:30.670934 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:38:30.670941 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:38:30.670948 | orchestrator |
2026-02-04 02:38:30.670955 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-04 02:38:30.670963 | orchestrator | Wednesday 04 February 2026 02:38:29 +0000 (0:00:00.340) 0:07:51.628 ****
2026-02-04 02:38:30.670974 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:30.670982 | orchestrator |
2026-02-04 02:38:30.670993 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-04 02:38:30.671001 | orchestrator | Wednesday 04 February 2026 02:38:29 +0000 (0:00:00.219) 0:07:51.860 ****
2026-02-04 02:38:30.671008 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:30.671016 | orchestrator |
2026-02-04 02:38:30.671023 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-04 02:38:30.671030 | orchestrator | Wednesday 04 February 2026 02:38:30 +0000 (0:00:00.219) 0:07:52.079 ****
2026-02-04 02:38:30.671042 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.970517 | orchestrator |
2026-02-04 02:38:50.970669 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-04 02:38:50.970684 | orchestrator | Wednesday 04 February 2026 02:38:30 +0000 (0:00:00.439) 0:07:52.519 ****
2026-02-04 02:38:50.970694 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.970703 | orchestrator |
2026-02-04 02:38:50.970710 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-04 02:38:50.970718 | orchestrator | Wednesday 04 February 2026 02:38:30 +0000 (0:00:00.244) 0:07:52.764 ****
2026-02-04 02:38:50.970726 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.970734 | orchestrator |
2026-02-04 02:38:50.970742 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-04 02:38:50.970750 | orchestrator | Wednesday 04 February 2026 02:38:31 +0000 (0:00:00.241) 0:07:53.006 ****
2026-02-04 02:38:50.970758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 02:38:50.970767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 02:38:50.970775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 02:38:50.970782 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.970790 | orchestrator |
2026-02-04 02:38:50.970798 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-04 02:38:50.970806 | orchestrator | Wednesday 04 February 2026 02:38:31 +0000 (0:00:00.514) 0:07:53.520 ****
2026-02-04 02:38:50.970814 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.970822 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:38:50.970830 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:38:50.970838 | orchestrator |
2026-02-04 02:38:50.970846 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-04 02:38:50.970854 | orchestrator | Wednesday 04 February 2026 02:38:31 +0000 (0:00:00.333) 0:07:53.854 ****
2026-02-04 02:38:50.970862 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.970870 | orchestrator |
2026-02-04 02:38:50.970877 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-04 02:38:50.970885 | orchestrator | Wednesday 04 February 2026 02:38:32 +0000 (0:00:00.233) 0:07:54.087 ****
2026-02-04 02:38:50.970893 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.970901 | orchestrator |
2026-02-04 02:38:50.970908 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-04 02:38:50.970916 | orchestrator |
2026-02-04 02:38:50.970924 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-04 02:38:50.970932 | orchestrator | Wednesday 04 February 2026 02:38:33 +0000 (0:00:00.956) 0:07:55.043 ****
2026-02-04 02:38:50.970941 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:38:50.970951 | orchestrator |
2026-02-04 02:38:50.970959 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-04 02:38:50.970967 | orchestrator | Wednesday 04 February 2026 02:38:34 +0000 (0:00:01.279) 0:07:56.323 ****
2026-02-04 02:38:50.970975 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:38:50.971005 | orchestrator |
2026-02-04 02:38:50.971014 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-04 02:38:50.971021 | orchestrator | Wednesday 04 February 2026 02:38:35 +0000 (0:00:01.373) 0:07:57.696 ****
2026-02-04 02:38:50.971029 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.971037 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:38:50.971044 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:38:50.971052 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:38:50.971060 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:38:50.971068 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:38:50.971076 | orchestrator |
2026-02-04 02:38:50.971083 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-04 02:38:50.971091 | orchestrator | Wednesday 04 February 2026 02:38:36 +0000 (0:00:01.046) 0:07:58.743 ****
2026-02-04 02:38:50.971099 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:38:50.971107 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:38:50.971116 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:38:50.971124 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:38:50.971132 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:38:50.971140 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:38:50.971149 | orchestrator |
2026-02-04 02:38:50.971157 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-04 02:38:50.971165 | orchestrator | Wednesday 04 February 2026 02:38:37 +0000 (0:00:01.022) 0:07:59.765 ****
2026-02-04 02:38:50.971173 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:38:50.971181 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:38:50.971189 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:38:50.971197 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:38:50.971204 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:38:50.971212 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:38:50.971220 | orchestrator |
2026-02-04 02:38:50.971228 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-04 02:38:50.971236 | orchestrator | Wednesday 04 February 2026 02:38:38 +0000 (0:00:00.755) 0:08:00.520 ****
2026-02-04 02:38:50.971243 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:38:50.971251 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:38:50.971259 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:38:50.971266 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:38:50.971274 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:38:50.971282 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:38:50.971289 | orchestrator |
2026-02-04 02:38:50.971310 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-04 02:38:50.971318 | orchestrator | Wednesday 04 February 2026 02:38:39 +0000 (0:00:00.985) 0:08:01.506 ****
2026-02-04 02:38:50.971326 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.971333 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:38:50.971341 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:38:50.971348 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:38:50.971356 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:38:50.971381 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:38:50.971389 | orchestrator |
2026-02-04 02:38:50.971396 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-04 02:38:50.971404 | orchestrator | Wednesday 04 February 2026 02:38:41 +0000 (0:00:01.899) 0:08:03.405 ****
2026-02-04 02:38:50.971411 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.971419 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:38:50.971427 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:38:50.971434 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:38:50.971442 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:38:50.971450 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:38:50.971457 | orchestrator |
2026-02-04 02:38:50.971465 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-04 02:38:50.971472 | orchestrator | Wednesday 04 February 2026 02:38:42 +0000 (0:00:00.877) 0:08:04.283 ****
2026-02-04 02:38:50.971480 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.971494 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:38:50.971501 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:38:50.971509 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:38:50.971516 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:38:50.971523 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:38:50.971531 | orchestrator |
2026-02-04 02:38:50.971538 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-04 02:38:50.971546 | orchestrator | Wednesday 04 February 2026 02:38:43 +0000 (0:00:00.604) 0:08:04.888 ****
2026-02-04 02:38:50.971553 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:38:50.971561 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:38:50.971568 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:38:50.971576 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:38:50.971600 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:38:50.971608 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:38:50.971616 | orchestrator |
2026-02-04 02:38:50.971624 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-04 02:38:50.971631 | orchestrator | Wednesday 04 February 2026 02:38:44 +0000 (0:00:01.360) 0:08:06.249 ****
2026-02-04 02:38:50.971639 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:38:50.971646 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:38:50.971654 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:38:50.971661 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:38:50.971668 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:38:50.971676 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:38:50.971683 | orchestrator |
2026-02-04 02:38:50.971690 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-04 02:38:50.971698 | orchestrator | Wednesday 04 February 2026 02:38:45 +0000 (0:00:01.020) 0:08:07.270 ****
2026-02-04 02:38:50.971706 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.971713 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:38:50.971721 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:38:50.971728 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:38:50.971735 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:38:50.971743 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:38:50.971750 | orchestrator |
2026-02-04 02:38:50.971758 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-04 02:38:50.971765 | orchestrator | Wednesday 04 February 2026 02:38:46 +0000 (0:00:00.902) 0:08:08.173 ****
2026-02-04 02:38:50.971772 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.971780 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:38:50.971787 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:38:50.971794 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:38:50.971801 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:38:50.971808 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:38:50.971815 | orchestrator |
2026-02-04 02:38:50.971822 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-04 02:38:50.971829 | orchestrator | Wednesday 04 February 2026 02:38:46 +0000 (0:00:00.631) 0:08:08.804 ****
2026-02-04 02:38:50.971837 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:38:50.971844 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:38:50.971851 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:38:50.971858 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:38:50.971865 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:38:50.971872 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:38:50.971879 | orchestrator |
2026-02-04 02:38:50.971886 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-04 02:38:50.971894 | orchestrator | Wednesday 04 February 2026 02:38:47 +0000 (0:00:00.882) 0:08:09.687 ****
2026-02-04 02:38:50.971902 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:38:50.971910 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:38:50.971917 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:38:50.971925 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:38:50.971933 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:38:50.971946 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:38:50.971954 | orchestrator |
2026-02-04 02:38:50.971961 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-04 02:38:50.971969 | orchestrator | Wednesday 04 February 2026 02:38:48 +0000 (0:00:00.642) 0:08:10.330 ****
2026-02-04 02:38:50.971976 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:38:50.971984 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:38:50.971991 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:38:50.971998 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:38:50.972006 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:38:50.972014 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:38:50.972021 | orchestrator |
2026-02-04 02:38:50.972029 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-04 02:38:50.972036 | orchestrator | Wednesday 04 February 2026 02:38:49 +0000 (0:00:00.909) 0:08:11.240 ****
2026-02-04 02:38:50.972044 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.972051 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:38:50.972058 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:38:50.972066 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:38:50.972074 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:38:50.972082 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:38:50.972089 | orchestrator |
2026-02-04 02:38:50.972097 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-04 02:38:50.972105 | orchestrator | Wednesday 04 February 2026 02:38:49 +0000 (0:00:00.606) 0:08:11.846 ****
2026-02-04 02:38:50.972113 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:38:50.972120 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:38:50.972128 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:38:50.972135 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:38:50.972150 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:39:22.042709 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:39:22.042828 | orchestrator |
2026-02-04 02:39:22.042844 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-04 02:39:22.042856 | orchestrator | Wednesday 04 February 2026 02:38:50 +0000 (0:00:00.978) 0:08:12.824 ****
2026-02-04 02:39:22.042867 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:39:22.042877 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:39:22.042887 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:39:22.042896 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:39:22.042907 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:39:22.042917 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:39:22.042926 | orchestrator |
2026-02-04 02:39:22.042936 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-04 02:39:22.042946 | orchestrator | Wednesday 04 February 2026 02:38:51 +0000 (0:00:00.895) 0:08:13.719 ****
2026-02-04 02:39:22.042956 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:22.042965 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:22.042975 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:22.042984 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:39:22.042994 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:39:22.043004 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:39:22.043013 | orchestrator |
2026-02-04 02:39:22.043023 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-04 02:39:22.043033 | orchestrator | Wednesday 04 February 2026 02:38:52 +0000 (0:00:00.686) 0:08:14.406 ****
2026-02-04 02:39:22.043042 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:22.043052 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:22.043103 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:22.043114 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:39:22.043125 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:39:22.043135 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:39:22.043145 | orchestrator |
2026-02-04 02:39:22.043154 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-02-04 02:39:22.043164 | orchestrator | Wednesday 04 February 2026 02:38:53 +0000 (0:00:01.295) 0:08:15.702 ****
2026-02-04 02:39:22.043195 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 02:39:22.043205 | orchestrator |
2026-02-04 02:39:22.043215 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-02-04 02:39:22.043226 | orchestrator | Wednesday 04 February 2026 02:38:57 +0000 (0:00:03.775) 0:08:19.478 ****
2026-02-04 02:39:22.043237 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 02:39:22.043249 | orchestrator |
2026-02-04 02:39:22.043287 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-02-04 02:39:22.043297 | orchestrator | Wednesday 04 February 2026 02:38:59 +0000 (0:00:01.936) 0:08:21.414 ****
2026-02-04 02:39:22.043306 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:39:22.043316 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:39:22.043326 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:39:22.043335 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:39:22.043345 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:39:22.043354 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:39:22.043364 | orchestrator |
2026-02-04 02:39:22.043374 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-02-04 02:39:22.043383 | orchestrator | Wednesday 04 February 2026 02:39:01 +0000 (0:00:01.825) 0:08:23.239 ****
2026-02-04 02:39:22.043393 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:39:22.043402 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:39:22.043412 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:39:22.043421 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:39:22.043431 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:39:22.043440 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:39:22.043450 | orchestrator |
2026-02-04 02:39:22.043459 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-02-04 02:39:22.043469 | orchestrator | Wednesday 04 February 2026 02:39:02 +0000 (0:00:01.017) 0:08:24.257 ****
2026-02-04 02:39:22.043480 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:39:22.043491 | orchestrator |
2026-02-04 02:39:22.043501 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-02-04 02:39:22.043510 | orchestrator | Wednesday 04 February 2026 02:39:03 +0000 (0:00:01.322) 0:08:25.579 ****
2026-02-04 02:39:22.043520 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:39:22.043530 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:39:22.043539 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:39:22.043549 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:39:22.043558 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:39:22.043568 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:39:22.043577 | orchestrator |
2026-02-04 02:39:22.043587 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-02-04 02:39:22.043596 | orchestrator | Wednesday 04 February 2026 02:39:05 +0000 (0:00:01.784) 0:08:27.363 ****
2026-02-04 02:39:22.043625 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:39:22.043636 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:39:22.043645 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:39:22.043655 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:39:22.043664 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:39:22.043674 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:39:22.043683 | orchestrator |
2026-02-04 02:39:22.043693 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-02-04 02:39:22.043702 | orchestrator | Wednesday 04 February 2026 02:39:08 +0000 (0:00:03.370) 0:08:30.734 ****
2026-02-04 02:39:22.043717 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:39:22.043727 | orchestrator |
2026-02-04 02:39:22.043737 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-02-04 02:39:22.043754 | orchestrator | Wednesday 04 February 2026 02:39:10 +0000 (0:00:01.377) 0:08:32.112 ****
2026-02-04 02:39:22.043764 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:22.043773 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:22.043783 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:22.043792 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:39:22.043819 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:39:22.043829 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:39:22.043839 | orchestrator |
2026-02-04 02:39:22.043848 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-02-04 02:39:22.043858 | orchestrator | Wednesday 04 February 2026 02:39:11 +0000 (0:00:00.860) 0:08:32.972 ****
2026-02-04 02:39:22.043867 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:39:22.043877 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:39:22.043886 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:39:22.043896 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:39:22.043905 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:39:22.043915 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:39:22.043924 | orchestrator |
2026-02-04 02:39:22.043934 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-02-04 02:39:22.043943 | orchestrator | Wednesday 04 February 2026 02:39:13 +0000 (0:00:02.154) 0:08:35.127 ****
2026-02-04 02:39:22.043952 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:22.043962 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:22.043971 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:22.043981 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:39:22.043990 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:39:22.043999 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:39:22.044009 | orchestrator |
2026-02-04 02:39:22.044018 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-02-04 02:39:22.044028 | orchestrator |
2026-02-04 02:39:22.044037 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-04 02:39:22.044047 | orchestrator | Wednesday 04 February 2026 02:39:14 +0000 (0:00:01.175) 0:08:36.302 ****
2026-02-04 02:39:22.044057 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 02:39:22.044067 | orchestrator |
2026-02-04 02:39:22.044076 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-04 02:39:22.044086 | orchestrator | Wednesday 04 February 2026 02:39:15 +0000 (0:00:00.820) 0:08:37.123 ****
2026-02-04 02:39:22.044096 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 02:39:22.044105 | orchestrator |
2026-02-04 02:39:22.044115 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-04 02:39:22.044124 | orchestrator | Wednesday 04 February 2026 02:39:15 +0000 (0:00:00.578) 0:08:37.701 ****
2026-02-04 02:39:22.044134 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:39:22.044143 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:39:22.044152 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:39:22.044162 | orchestrator |
2026-02-04 02:39:22.044171 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-04 02:39:22.044181 | orchestrator | Wednesday 04 February 2026 02:39:16 +0000 (0:00:00.358) 0:08:38.060 ****
2026-02-04 02:39:22.044191 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:22.044200 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:22.044209 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:22.044219 | orchestrator |
2026-02-04 02:39:22.044229 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-04 02:39:22.044238 | orchestrator | Wednesday 04 February 2026 02:39:17 +0000 (0:00:01.045) 0:08:39.105 ****
2026-02-04 02:39:22.044248 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:22.044257 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:22.044267 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:22.044276 | orchestrator |
2026-02-04 02:39:22.044286 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-04 02:39:22.044301 | orchestrator | Wednesday 04 February 2026 02:39:17 +0000 (0:00:00.717) 0:08:39.822 ****
2026-02-04 02:39:22.044311 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:22.044320 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:22.044330 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:22.044339 | orchestrator |
2026-02-04 02:39:22.044349 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-04 02:39:22.044358 | orchestrator | Wednesday 04 February 2026 02:39:18 +0000 (0:00:00.767) 0:08:40.590 ****
2026-02-04 02:39:22.044368 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:39:22.044377 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:39:22.044387 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:39:22.044396 | orchestrator |
2026-02-04 02:39:22.044406 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-04 02:39:22.044415 | orchestrator | Wednesday 04 February 2026 02:39:19 +0000 (0:00:00.316) 0:08:40.906 ****
2026-02-04 02:39:22.044425 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:39:22.044434 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:39:22.044443 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:39:22.044453 | orchestrator |
2026-02-04 02:39:22.044462 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-04 02:39:22.044472 | orchestrator | Wednesday 04 February 2026 02:39:19 +0000 (0:00:00.587) 0:08:41.494 ****
2026-02-04 02:39:22.044481 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:39:22.044491 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:39:22.044500 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:39:22.044509 | orchestrator |
2026-02-04 02:39:22.044519 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-04 02:39:22.044529 | orchestrator | Wednesday 04 February 2026 02:39:19 +0000 (0:00:00.322) 0:08:41.817 ****
2026-02-04 02:39:22.044538 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:22.044548 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:22.044557 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:22.044566 | orchestrator |
2026-02-04 02:39:22.044576 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-04 02:39:22.044590 | orchestrator | Wednesday 04 February 2026 02:39:20 +0000 (0:00:00.698) 0:08:42.515 ****
2026-02-04 02:39:22.044600 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:22.044637 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:22.044654 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:22.044670 | orchestrator |
2026-02-04 02:39:22.044687 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-04 02:39:22.044703 | orchestrator | Wednesday 04 February 2026 02:39:21 +0000 (0:00:00.763) 0:08:43.279 ****
2026-02-04 02:39:22.044714 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:39:22.044724 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:39:22.044740 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:39:52.944435 | orchestrator |
2026-02-04 02:39:52.944554 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-04 02:39:52.944571 | orchestrator | Wednesday 04 February 2026 02:39:22 +0000 (0:00:00.613) 0:08:43.893 ****
2026-02-04 02:39:52.944583 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:39:52.944595 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:39:52.944606 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:39:52.944617 | orchestrator |
2026-02-04 02:39:52.944683 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-04 02:39:52.944696 | orchestrator | Wednesday 04 February 2026 02:39:22 +0000 (0:00:00.345) 0:08:44.238 ****
2026-02-04 02:39:52.944707 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:52.944719 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:52.944730 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:52.944741 | orchestrator |
2026-02-04 02:39:52.944752 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-04 02:39:52.944763 | orchestrator | Wednesday 04 February 2026 02:39:22 +0000 (0:00:00.380) 0:08:44.618 ****
2026-02-04 02:39:52.944799 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:52.944811 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:52.944823 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:52.944834 | orchestrator |
2026-02-04 02:39:52.944845 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-04 02:39:52.944856 | orchestrator | Wednesday 04 February 2026 02:39:23 +0000 (0:00:00.635) 0:08:45.254 ****
2026-02-04 02:39:52.944867 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:52.944877 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:52.944888 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:52.944899 | orchestrator |
2026-02-04 02:39:52.944910 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-04 02:39:52.944921 | orchestrator | Wednesday 04 February 2026 02:39:23 +0000 (0:00:00.377) 0:08:45.632 ****
2026-02-04 02:39:52.944932 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:39:52.944943 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:39:52.944953 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:39:52.944964 | orchestrator |
2026-02-04 02:39:52.944977 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-04 02:39:52.944990 | orchestrator | Wednesday 04 February 2026 02:39:24 +0000 (0:00:00.340) 0:08:45.972 ****
2026-02-04 02:39:52.945003 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:39:52.945016 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:39:52.945029 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:39:52.945042 | orchestrator |
2026-02-04 02:39:52.945055 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-04 02:39:52.945067 | orchestrator | Wednesday 04 February 2026 02:39:24 +0000 (0:00:00.346) 0:08:46.318 ****
2026-02-04 02:39:52.945080 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:39:52.945092 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:39:52.945105 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:39:52.945117 | orchestrator |
2026-02-04 02:39:52.945129 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-04 02:39:52.945141 | orchestrator | Wednesday 04 February 2026 02:39:25 +0000 (0:00:00.625) 0:08:46.944 ****
2026-02-04 02:39:52.945155 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:52.945168 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:52.945182 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:52.945195 | orchestrator |
2026-02-04 02:39:52.945208 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-04 02:39:52.945220 | orchestrator | Wednesday 04 February 2026 02:39:25 +0000 (0:00:00.368) 0:08:47.313 ****
2026-02-04 02:39:52.945234 | orchestrator | ok: [testbed-node-3]
2026-02-04 02:39:52.945246 | orchestrator | ok: [testbed-node-4]
2026-02-04 02:39:52.945259 | orchestrator | ok: [testbed-node-5]
2026-02-04 02:39:52.945272 | orchestrator |
2026-02-04 02:39:52.945285 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-04 02:39:52.945298 | orchestrator | Wednesday 04 February 2026 02:39:26 +0000 (0:00:00.552) 0:08:47.866 ****
2026-02-04 02:39:52.945311 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:39:52.945323 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:39:52.945336 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-04 02:39:52.945349 | orchestrator |
2026-02-04 02:39:52.945360 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-04 02:39:52.945371 | orchestrator | Wednesday 04 February 2026 02:39:26 +0000 (0:00:00.655) 0:08:48.521 ****
2026-02-04 02:39:52.945382 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 02:39:52.945393 | orchestrator |
2026-02-04 02:39:52.945403 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-04 02:39:52.945414 | orchestrator | Wednesday 04 February 2026 02:39:28 +0000 (0:00:02.089) 0:08:50.610 ****
2026-02-04 02:39:52.945428 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-02-04 02:39:52.945449 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:39:52.945460 | orchestrator |
2026-02-04 02:39:52.945471 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-04 02:39:52.945482 | orchestrator | Wednesday 04 February 2026 02:39:28 +0000 (0:00:00.240) 0:08:50.850 ****
2026-02-04 02:39:52.945510 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-04 02:39:52.945548 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-04 02:39:52.945561 | orchestrator |
2026-02-04 02:39:52.945572 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-02-04 02:39:52.945583 | orchestrator | Wednesday 04 February 2026 02:39:35 +0000 (0:00:06.694) 0:08:57.544 ****
2026-02-04 02:39:52.945594 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 02:39:52.945605 | orchestrator |
2026-02-04 02:39:52.945616 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-04 02:39:52.945695 | orchestrator | Wednesday 04 February 2026 02:39:38 +0000 (0:00:03.307) 0:09:00.852 ****
2026-02-04 02:39:52.945709 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 02:39:52.945721 | orchestrator |
2026-02-04 02:39:52.945732 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-04 02:39:52.945743 | orchestrator | Wednesday 04 February 2026 02:39:39 +0000 (0:00:00.783) 0:09:01.635 ****
2026-02-04 02:39:52.945754 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-04 02:39:52.945765 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-04 02:39:52.945776 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-04 02:39:52.945787 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-04 02:39:52.945798 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-04 02:39:52.945809 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-04 02:39:52.945820 | orchestrator |
2026-02-04 02:39:52.945830 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-04 02:39:52.945841 | orchestrator | Wednesday 04 February 2026 02:39:40 +0000 (0:00:01.053) 0:09:02.688 ****
2026-02-04 02:39:52.945852 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-04 02:39:52.945863 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-04 02:39:52.945874 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-04 02:39:52.945885 | orchestrator |
2026-02-04 02:39:52.945896 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-04 02:39:52.945906 | orchestrator | Wednesday 04 February 2026 02:39:42 +0000 (0:00:02.021) 0:09:04.710 ****
2026-02-04 02:39:52.945917 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-04 02:39:52.945929 | orchestrator | skipping: [testbed-node-3]
=> (item=None)  2026-02-04 02:39:52.945940 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:39:52.945951 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-04 02:39:52.945962 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-04 02:39:52.945973 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:39:52.945984 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-04 02:39:52.945995 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-04 02:39:52.946015 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:39:52.946080 | orchestrator | 2026-02-04 02:39:52.946091 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-04 02:39:52.946103 | orchestrator | Wednesday 04 February 2026 02:39:44 +0000 (0:00:01.274) 0:09:05.985 **** 2026-02-04 02:39:52.946114 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:39:52.946125 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:39:52.946136 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:39:52.946147 | orchestrator | 2026-02-04 02:39:52.946157 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-04 02:39:52.946169 | orchestrator | Wednesday 04 February 2026 02:39:46 +0000 (0:00:02.681) 0:09:08.666 **** 2026-02-04 02:39:52.946180 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:39:52.946191 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:39:52.946201 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:39:52.946212 | orchestrator | 2026-02-04 02:39:52.946224 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-04 02:39:52.946234 | orchestrator | Wednesday 04 February 2026 02:39:47 +0000 (0:00:00.616) 0:09:09.283 **** 2026-02-04 02:39:52.946245 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-04 02:39:52.946257 | orchestrator | 2026-02-04 02:39:52.946268 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-04 02:39:52.946279 | orchestrator | Wednesday 04 February 2026 02:39:47 +0000 (0:00:00.543) 0:09:09.826 **** 2026-02-04 02:39:52.946290 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:39:52.946301 | orchestrator | 2026-02-04 02:39:52.946312 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-04 02:39:52.946323 | orchestrator | Wednesday 04 February 2026 02:39:48 +0000 (0:00:00.818) 0:09:10.645 **** 2026-02-04 02:39:52.946334 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:39:52.946345 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:39:52.946356 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:39:52.946366 | orchestrator | 2026-02-04 02:39:52.946377 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-04 02:39:52.946395 | orchestrator | Wednesday 04 February 2026 02:39:50 +0000 (0:00:01.272) 0:09:11.918 **** 2026-02-04 02:39:52.946406 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:39:52.946417 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:39:52.946428 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:39:52.946439 | orchestrator | 2026-02-04 02:39:52.946450 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-04 02:39:52.946461 | orchestrator | Wednesday 04 February 2026 02:39:51 +0000 (0:00:01.126) 0:09:13.044 **** 2026-02-04 02:39:52.946472 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:39:52.946483 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:39:52.946494 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:39:52.946505 | orchestrator | 2026-02-04 
02:39:52.946525 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-04 02:40:13.275231 | orchestrator | Wednesday 04 February 2026 02:39:52 +0000 (0:00:01.750) 0:09:14.795 **** 2026-02-04 02:40:13.275348 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:40:13.275367 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:40:13.275380 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:40:13.275392 | orchestrator | 2026-02-04 02:40:13.275404 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-04 02:40:13.275416 | orchestrator | Wednesday 04 February 2026 02:39:55 +0000 (0:00:02.311) 0:09:17.106 **** 2026-02-04 02:40:13.275427 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:40:13.275439 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:40:13.275449 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:40:13.275460 | orchestrator | 2026-02-04 02:40:13.275471 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-04 02:40:13.275506 | orchestrator | Wednesday 04 February 2026 02:39:56 +0000 (0:00:01.490) 0:09:18.597 **** 2026-02-04 02:40:13.275517 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:40:13.275528 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:40:13.275539 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:40:13.275549 | orchestrator | 2026-02-04 02:40:13.275560 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-04 02:40:13.275571 | orchestrator | Wednesday 04 February 2026 02:39:57 +0000 (0:00:00.694) 0:09:19.291 **** 2026-02-04 02:40:13.275582 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:40:13.275593 | orchestrator | 2026-02-04 02:40:13.275604 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-04 02:40:13.275615 | orchestrator | Wednesday 04 February 2026 02:39:57 +0000 (0:00:00.558) 0:09:19.849 **** 2026-02-04 02:40:13.275626 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:40:13.275637 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:40:13.275696 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:40:13.275708 | orchestrator | 2026-02-04 02:40:13.275719 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-04 02:40:13.275729 | orchestrator | Wednesday 04 February 2026 02:39:58 +0000 (0:00:00.608) 0:09:20.458 **** 2026-02-04 02:40:13.275740 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:40:13.275751 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:40:13.275762 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:40:13.275778 | orchestrator | 2026-02-04 02:40:13.275798 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-04 02:40:13.275816 | orchestrator | Wednesday 04 February 2026 02:39:59 +0000 (0:00:01.185) 0:09:21.644 **** 2026-02-04 02:40:13.275833 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 02:40:13.275849 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 02:40:13.275869 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 02:40:13.275889 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:13.275908 | orchestrator | 2026-02-04 02:40:13.275928 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-04 02:40:13.275948 | orchestrator | Wednesday 04 February 2026 02:40:00 +0000 (0:00:00.696) 0:09:22.340 **** 2026-02-04 02:40:13.275966 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:40:13.275986 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:40:13.276004 | orchestrator | ok: [testbed-node-5] 2026-02-04 
02:40:13.276024 | orchestrator | 2026-02-04 02:40:13.276044 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-04 02:40:13.276063 | orchestrator | 2026-02-04 02:40:13.276083 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-04 02:40:13.276103 | orchestrator | Wednesday 04 February 2026 02:40:01 +0000 (0:00:00.617) 0:09:22.958 **** 2026-02-04 02:40:13.276123 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:40:13.276144 | orchestrator | 2026-02-04 02:40:13.276164 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-04 02:40:13.276184 | orchestrator | Wednesday 04 February 2026 02:40:01 +0000 (0:00:00.816) 0:09:23.775 **** 2026-02-04 02:40:13.276203 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:40:13.276223 | orchestrator | 2026-02-04 02:40:13.276238 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-04 02:40:13.276254 | orchestrator | Wednesday 04 February 2026 02:40:02 +0000 (0:00:00.587) 0:09:24.362 **** 2026-02-04 02:40:13.276269 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:13.276285 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:40:13.276302 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:40:13.276334 | orchestrator | 2026-02-04 02:40:13.276352 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-04 02:40:13.276369 | orchestrator | Wednesday 04 February 2026 02:40:03 +0000 (0:00:00.573) 0:09:24.936 **** 2026-02-04 02:40:13.276387 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:40:13.276405 | orchestrator | ok: [testbed-node-4] 2026-02-04 
02:40:13.276421 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:40:13.276438 | orchestrator | 2026-02-04 02:40:13.276456 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-04 02:40:13.276473 | orchestrator | Wednesday 04 February 2026 02:40:03 +0000 (0:00:00.719) 0:09:25.656 **** 2026-02-04 02:40:13.276490 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:40:13.276527 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:40:13.276549 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:40:13.276568 | orchestrator | 2026-02-04 02:40:13.276586 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-04 02:40:13.276602 | orchestrator | Wednesday 04 February 2026 02:40:04 +0000 (0:00:00.729) 0:09:26.386 **** 2026-02-04 02:40:13.276614 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:40:13.276624 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:40:13.276635 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:40:13.276677 | orchestrator | 2026-02-04 02:40:13.276696 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-04 02:40:13.276708 | orchestrator | Wednesday 04 February 2026 02:40:05 +0000 (0:00:00.735) 0:09:27.121 **** 2026-02-04 02:40:13.276741 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:13.276752 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:40:13.276763 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:40:13.276774 | orchestrator | 2026-02-04 02:40:13.276785 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-04 02:40:13.276796 | orchestrator | Wednesday 04 February 2026 02:40:05 +0000 (0:00:00.630) 0:09:27.751 **** 2026-02-04 02:40:13.276806 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:13.276817 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:40:13.276828 | orchestrator | skipping: 
[testbed-node-5] 2026-02-04 02:40:13.276838 | orchestrator | 2026-02-04 02:40:13.276849 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-04 02:40:13.276860 | orchestrator | Wednesday 04 February 2026 02:40:06 +0000 (0:00:00.334) 0:09:28.086 **** 2026-02-04 02:40:13.276870 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:13.276881 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:40:13.276892 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:40:13.276902 | orchestrator | 2026-02-04 02:40:13.276913 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-04 02:40:13.276924 | orchestrator | Wednesday 04 February 2026 02:40:06 +0000 (0:00:00.342) 0:09:28.429 **** 2026-02-04 02:40:13.276934 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:40:13.276945 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:40:13.276956 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:40:13.276966 | orchestrator | 2026-02-04 02:40:13.276977 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-04 02:40:13.276988 | orchestrator | Wednesday 04 February 2026 02:40:07 +0000 (0:00:00.726) 0:09:29.155 **** 2026-02-04 02:40:13.276998 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:40:13.277009 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:40:13.277020 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:40:13.277030 | orchestrator | 2026-02-04 02:40:13.277041 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-04 02:40:13.277052 | orchestrator | Wednesday 04 February 2026 02:40:08 +0000 (0:00:01.052) 0:09:30.208 **** 2026-02-04 02:40:13.277062 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:13.277073 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:40:13.277084 | orchestrator | skipping: [testbed-node-5] 2026-02-04 
02:40:13.277094 | orchestrator | 2026-02-04 02:40:13.277105 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-04 02:40:13.277126 | orchestrator | Wednesday 04 February 2026 02:40:08 +0000 (0:00:00.314) 0:09:30.522 **** 2026-02-04 02:40:13.277137 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:13.277147 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:40:13.277158 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:40:13.277169 | orchestrator | 2026-02-04 02:40:13.277180 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-04 02:40:13.277190 | orchestrator | Wednesday 04 February 2026 02:40:08 +0000 (0:00:00.323) 0:09:30.845 **** 2026-02-04 02:40:13.277201 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:40:13.277212 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:40:13.277223 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:40:13.277233 | orchestrator | 2026-02-04 02:40:13.277244 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-04 02:40:13.277255 | orchestrator | Wednesday 04 February 2026 02:40:09 +0000 (0:00:00.621) 0:09:31.467 **** 2026-02-04 02:40:13.277265 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:40:13.277276 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:40:13.277287 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:40:13.277297 | orchestrator | 2026-02-04 02:40:13.277308 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-04 02:40:13.277319 | orchestrator | Wednesday 04 February 2026 02:40:09 +0000 (0:00:00.344) 0:09:31.811 **** 2026-02-04 02:40:13.277329 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:40:13.277340 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:40:13.277351 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:40:13.277361 | orchestrator | 2026-02-04 
02:40:13.277372 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-04 02:40:13.277383 | orchestrator | Wednesday 04 February 2026 02:40:10 +0000 (0:00:00.357) 0:09:32.169 **** 2026-02-04 02:40:13.277394 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:13.277405 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:40:13.277415 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:40:13.277426 | orchestrator | 2026-02-04 02:40:13.277437 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-04 02:40:13.277447 | orchestrator | Wednesday 04 February 2026 02:40:10 +0000 (0:00:00.318) 0:09:32.488 **** 2026-02-04 02:40:13.277458 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:13.277469 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:40:13.277479 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:40:13.277490 | orchestrator | 2026-02-04 02:40:13.277501 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-04 02:40:13.277512 | orchestrator | Wednesday 04 February 2026 02:40:11 +0000 (0:00:00.583) 0:09:33.071 **** 2026-02-04 02:40:13.277522 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:13.277533 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:40:13.277544 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:40:13.277554 | orchestrator | 2026-02-04 02:40:13.277565 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-04 02:40:13.277576 | orchestrator | Wednesday 04 February 2026 02:40:11 +0000 (0:00:00.322) 0:09:33.393 **** 2026-02-04 02:40:13.277586 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:40:13.277597 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:40:13.277608 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:40:13.277619 | orchestrator | 2026-02-04 02:40:13.277636 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-04 02:40:13.277698 | orchestrator | Wednesday 04 February 2026 02:40:11 +0000 (0:00:00.340) 0:09:33.734 **** 2026-02-04 02:40:13.277710 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:40:13.277721 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:40:13.277732 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:40:13.277742 | orchestrator | 2026-02-04 02:40:13.277753 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-04 02:40:13.277764 | orchestrator | Wednesday 04 February 2026 02:40:12 +0000 (0:00:00.822) 0:09:34.556 **** 2026-02-04 02:40:13.277791 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:40:59.817517 | orchestrator | 2026-02-04 02:40:59.817635 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-04 02:40:59.817652 | orchestrator | Wednesday 04 February 2026 02:40:13 +0000 (0:00:00.569) 0:09:35.126 **** 2026-02-04 02:40:59.817664 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:40:59.817676 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-04 02:40:59.817755 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-04 02:40:59.817767 | orchestrator | 2026-02-04 02:40:59.817778 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-04 02:40:59.817789 | orchestrator | Wednesday 04 February 2026 02:40:15 +0000 (0:00:02.058) 0:09:37.184 **** 2026-02-04 02:40:59.817800 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-04 02:40:59.817812 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-04 02:40:59.817823 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:40:59.817834 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-04 02:40:59.817845 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-04 02:40:59.817856 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:40:59.817867 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-04 02:40:59.817877 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-04 02:40:59.817888 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:40:59.817899 | orchestrator | 2026-02-04 02:40:59.817910 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-04 02:40:59.817921 | orchestrator | Wednesday 04 February 2026 02:40:16 +0000 (0:00:01.191) 0:09:38.376 **** 2026-02-04 02:40:59.817931 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:59.817942 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:40:59.817953 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:40:59.817964 | orchestrator | 2026-02-04 02:40:59.817975 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-04 02:40:59.817986 | orchestrator | Wednesday 04 February 2026 02:40:17 +0000 (0:00:00.650) 0:09:39.027 **** 2026-02-04 02:40:59.817997 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:40:59.818009 | orchestrator | 2026-02-04 02:40:59.818079 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-04 02:40:59.818093 | orchestrator | Wednesday 04 February 2026 02:40:17 +0000 (0:00:00.556) 0:09:39.583 **** 2026-02-04 02:40:59.818109 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-04 02:40:59.818124 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-04 02:40:59.818137 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-04 02:40:59.818149 | orchestrator | 2026-02-04 02:40:59.818163 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-04 02:40:59.818176 | orchestrator | Wednesday 04 February 2026 02:40:18 +0000 (0:00:00.832) 0:09:40.415 **** 2026-02-04 02:40:59.818189 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:40:59.818202 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-04 02:40:59.818214 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:40:59.818227 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-04 02:40:59.818265 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:40:59.818279 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-04 02:40:59.818292 | orchestrator | 2026-02-04 02:40:59.818305 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-04 02:40:59.818317 | orchestrator | Wednesday 04 February 2026 02:40:22 +0000 (0:00:04.417) 0:09:44.833 **** 2026-02-04 02:40:59.818330 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:40:59.818342 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-04 02:40:59.818355 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:40:59.818369 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-04 02:40:59.818381 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:40:59.818409 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-04 02:40:59.818420 | orchestrator | 2026-02-04 02:40:59.818431 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-04 02:40:59.818441 | orchestrator | Wednesday 04 February 2026 02:40:25 +0000 (0:00:02.243) 0:09:47.077 **** 2026-02-04 02:40:59.818452 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-04 02:40:59.818463 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:40:59.818474 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-04 02:40:59.818485 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:40:59.818496 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-04 02:40:59.818507 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:40:59.818518 | orchestrator | 2026-02-04 02:40:59.818547 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-04 02:40:59.818559 | orchestrator | Wednesday 04 February 2026 02:40:26 +0000 (0:00:01.209) 0:09:48.286 **** 2026-02-04 02:40:59.818570 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-04 02:40:59.818581 | orchestrator | 2026-02-04 02:40:59.818592 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-04 02:40:59.818603 | orchestrator | Wednesday 04 February 2026 02:40:26 +0000 (0:00:00.222) 0:09:48.508 **** 2026-02-04 02:40:59.818613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-04 02:40:59.818625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 02:40:59.818636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 02:40:59.818647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 02:40:59.818658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 02:40:59.818669 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:59.818700 | orchestrator | 2026-02-04 02:40:59.818712 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-04 02:40:59.818723 | orchestrator | Wednesday 04 February 2026 02:40:27 +0000 (0:00:00.890) 0:09:49.399 **** 2026-02-04 02:40:59.818734 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 02:40:59.818745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 02:40:59.818756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 02:40:59.818775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 02:40:59.818786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 02:40:59.818797 | orchestrator | skipping: [testbed-node-3] 2026-02-04 
02:40:59.818809 | orchestrator | 2026-02-04 02:40:59.818820 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-04 02:40:59.818830 | orchestrator | Wednesday 04 February 2026 02:40:28 +0000 (0:00:01.151) 0:09:50.551 **** 2026-02-04 02:40:59.818841 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-04 02:40:59.818852 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-04 02:40:59.818863 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-04 02:40:59.818874 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-04 02:40:59.818885 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-04 02:40:59.818896 | orchestrator | 2026-02-04 02:40:59.818907 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-04 02:40:59.818917 | orchestrator | Wednesday 04 February 2026 02:40:57 +0000 (0:00:29.083) 0:10:19.635 **** 2026-02-04 02:40:59.818928 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:59.818939 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:40:59.818949 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:40:59.818960 | orchestrator | 2026-02-04 02:40:59.818971 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-04 02:40:59.818981 | orchestrator | 
Wednesday 04 February 2026 02:40:58 +0000 (0:00:00.318) 0:10:19.953 **** 2026-02-04 02:40:59.818992 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:40:59.819003 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:40:59.819013 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:40:59.819024 | orchestrator | 2026-02-04 02:40:59.819041 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-04 02:40:59.819052 | orchestrator | Wednesday 04 February 2026 02:40:58 +0000 (0:00:00.335) 0:10:20.289 **** 2026-02-04 02:40:59.819063 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:40:59.819074 | orchestrator | 2026-02-04 02:40:59.819085 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-02-04 02:40:59.819096 | orchestrator | Wednesday 04 February 2026 02:40:59 +0000 (0:00:00.811) 0:10:21.101 **** 2026-02-04 02:40:59.819113 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:41:10.639250 | orchestrator | 2026-02-04 02:41:10.639343 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-04 02:41:10.639356 | orchestrator | Wednesday 04 February 2026 02:40:59 +0000 (0:00:00.566) 0:10:21.668 **** 2026-02-04 02:41:10.639366 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:41:10.639377 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:41:10.639385 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:41:10.639393 | orchestrator | 2026-02-04 02:41:10.639402 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-04 02:41:10.639410 | orchestrator | Wednesday 04 February 2026 02:41:01 +0000 (0:00:01.283) 0:10:22.951 **** 2026-02-04 02:41:10.639438 | orchestrator | changed: 
[testbed-node-3] 2026-02-04 02:41:10.639446 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:41:10.639454 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:41:10.639462 | orchestrator | 2026-02-04 02:41:10.639470 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-04 02:41:10.639478 | orchestrator | Wednesday 04 February 2026 02:41:02 +0000 (0:00:01.401) 0:10:24.352 **** 2026-02-04 02:41:10.639486 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:41:10.639494 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:41:10.639502 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:41:10.639510 | orchestrator | 2026-02-04 02:41:10.639518 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-04 02:41:10.639526 | orchestrator | Wednesday 04 February 2026 02:41:04 +0000 (0:00:01.793) 0:10:26.145 **** 2026-02-04 02:41:10.639534 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-04 02:41:10.639544 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-04 02:41:10.639552 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-04 02:41:10.639560 | orchestrator | 2026-02-04 02:41:10.639568 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-04 02:41:10.639576 | orchestrator | Wednesday 04 February 2026 02:41:07 +0000 (0:00:02.719) 0:10:28.865 **** 2026-02-04 02:41:10.639584 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:10.639592 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:10.639600 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:10.639607 | orchestrator 
| 2026-02-04 02:41:10.639615 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-04 02:41:10.639623 | orchestrator | Wednesday 04 February 2026 02:41:07 +0000 (0:00:00.365) 0:10:29.231 **** 2026-02-04 02:41:10.639631 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:41:10.639640 | orchestrator | 2026-02-04 02:41:10.639647 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-04 02:41:10.639655 | orchestrator | Wednesday 04 February 2026 02:41:07 +0000 (0:00:00.541) 0:10:29.772 **** 2026-02-04 02:41:10.639663 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:10.639673 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:10.639753 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:10.639769 | orchestrator | 2026-02-04 02:41:10.639781 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-04 02:41:10.639796 | orchestrator | Wednesday 04 February 2026 02:41:08 +0000 (0:00:00.607) 0:10:30.379 **** 2026-02-04 02:41:10.639806 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:10.639815 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:10.639824 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:10.639834 | orchestrator | 2026-02-04 02:41:10.639843 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-04 02:41:10.639853 | orchestrator | Wednesday 04 February 2026 02:41:08 +0000 (0:00:00.358) 0:10:30.738 **** 2026-02-04 02:41:10.639862 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 02:41:10.639872 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 02:41:10.639881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 02:41:10.639890 | orchestrator 
| skipping: [testbed-node-3] 2026-02-04 02:41:10.639899 | orchestrator | 2026-02-04 02:41:10.639909 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-04 02:41:10.639918 | orchestrator | Wednesday 04 February 2026 02:41:09 +0000 (0:00:00.901) 0:10:31.640 **** 2026-02-04 02:41:10.639927 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:10.639937 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:10.639953 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:10.639962 | orchestrator | 2026-02-04 02:41:10.639972 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:41:10.639981 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-04 02:41:10.640005 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-04 02:41:10.640014 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-04 02:41:10.640024 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-04 02:41:10.640034 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-04 02:41:10.640058 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-04 02:41:10.640067 | orchestrator | 2026-02-04 02:41:10.640075 | orchestrator | 2026-02-04 02:41:10.640083 | orchestrator | 2026-02-04 02:41:10.640091 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:41:10.640099 | orchestrator | Wednesday 04 February 2026 02:41:10 +0000 (0:00:00.245) 0:10:31.885 **** 2026-02-04 02:41:10.640107 | orchestrator | =============================================================================== 
2026-02-04 02:41:10.640114 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 55.79s 2026-02-04 02:41:10.640123 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 40.95s 2026-02-04 02:41:10.640130 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 29.08s 2026-02-04 02:41:10.640138 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.21s 2026-02-04 02:41:10.640146 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.72s 2026-02-04 02:41:10.640154 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.32s 2026-02-04 02:41:10.640162 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.49s 2026-02-04 02:41:10.640169 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.91s 2026-02-04 02:41:10.640177 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.32s 2026-02-04 02:41:10.640185 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 6.69s 2026-02-04 02:41:10.640193 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.43s 2026-02-04 02:41:10.640201 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.34s 2026-02-04 02:41:10.640209 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.83s 2026-02-04 02:41:10.640216 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.42s 2026-02-04 02:41:10.640224 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.97s 2026-02-04 02:41:10.640232 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.78s 2026-02-04 
02:41:10.640240 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.56s 2026-02-04 02:41:10.640248 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.37s 2026-02-04 02:41:10.640256 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.31s 2026-02-04 02:41:10.640264 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.20s 2026-02-04 02:41:13.059090 | orchestrator | 2026-02-04 02:41:13 | INFO  | Task e3706930-d76e-408e-b8fc-ad1194e6bc46 (ceph-pools) was prepared for execution. 2026-02-04 02:41:13.059257 | orchestrator | 2026-02-04 02:41:13 | INFO  | It takes a moment until task e3706930-d76e-408e-b8fc-ad1194e6bc46 (ceph-pools) has been started and output is visible here. 2026-02-04 02:41:27.240891 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-04 02:41:27.241018 | orchestrator | 2.16.14 2026-02-04 02:41:27.241042 | orchestrator | 2026-02-04 02:41:27.241057 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-02-04 02:41:27.241072 | orchestrator | 2026-02-04 02:41:27.241080 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-04 02:41:27.241088 | orchestrator | Wednesday 04 February 2026 02:41:17 +0000 (0:00:00.628) 0:00:00.628 **** 2026-02-04 02:41:27.241095 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:41:27.241104 | orchestrator | 2026-02-04 02:41:27.241111 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-04 02:41:27.241119 | orchestrator | Wednesday 04 February 2026 02:41:18 +0000 (0:00:00.655) 0:00:01.283 **** 2026-02-04 02:41:27.241137 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:27.241145 | 
orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:27.241152 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:27.241164 | orchestrator | 2026-02-04 02:41:27.241181 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-04 02:41:27.241200 | orchestrator | Wednesday 04 February 2026 02:41:18 +0000 (0:00:00.632) 0:00:01.916 **** 2026-02-04 02:41:27.241212 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:27.241224 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:27.241236 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:27.241247 | orchestrator | 2026-02-04 02:41:27.241260 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-04 02:41:27.241273 | orchestrator | Wednesday 04 February 2026 02:41:19 +0000 (0:00:00.302) 0:00:02.219 **** 2026-02-04 02:41:27.241286 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:27.241299 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:27.241312 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:27.241324 | orchestrator | 2026-02-04 02:41:27.241356 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-04 02:41:27.241370 | orchestrator | Wednesday 04 February 2026 02:41:19 +0000 (0:00:00.824) 0:00:03.044 **** 2026-02-04 02:41:27.241378 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:27.241385 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:27.241393 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:27.241400 | orchestrator | 2026-02-04 02:41:27.241407 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-04 02:41:27.241416 | orchestrator | Wednesday 04 February 2026 02:41:20 +0000 (0:00:00.307) 0:00:03.352 **** 2026-02-04 02:41:27.241425 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:27.241433 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:27.241441 | 
orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:27.241449 | orchestrator | 2026-02-04 02:41:27.241458 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-04 02:41:27.241467 | orchestrator | Wednesday 04 February 2026 02:41:20 +0000 (0:00:00.312) 0:00:03.664 **** 2026-02-04 02:41:27.241475 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:27.241483 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:27.241492 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:27.241500 | orchestrator | 2026-02-04 02:41:27.241509 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-04 02:41:27.241518 | orchestrator | Wednesday 04 February 2026 02:41:20 +0000 (0:00:00.326) 0:00:03.991 **** 2026-02-04 02:41:27.241527 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:27.241536 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:27.241544 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:27.241552 | orchestrator | 2026-02-04 02:41:27.241560 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-04 02:41:27.241587 | orchestrator | Wednesday 04 February 2026 02:41:21 +0000 (0:00:00.535) 0:00:04.527 **** 2026-02-04 02:41:27.241596 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:27.241605 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:27.241612 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:27.241621 | orchestrator | 2026-02-04 02:41:27.241629 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-04 02:41:27.241637 | orchestrator | Wednesday 04 February 2026 02:41:21 +0000 (0:00:00.318) 0:00:04.846 **** 2026-02-04 02:41:27.241646 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 02:41:27.241655 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 02:41:27.241663 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 02:41:27.241676 | orchestrator | 2026-02-04 02:41:27.241693 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-04 02:41:27.241732 | orchestrator | Wednesday 04 February 2026 02:41:22 +0000 (0:00:00.701) 0:00:05.548 **** 2026-02-04 02:41:27.241744 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:27.241756 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:27.241768 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:27.241779 | orchestrator | 2026-02-04 02:41:27.241791 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-04 02:41:27.241803 | orchestrator | Wednesday 04 February 2026 02:41:22 +0000 (0:00:00.462) 0:00:06.010 **** 2026-02-04 02:41:27.241813 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 02:41:27.241824 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 02:41:27.241836 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 02:41:27.241847 | orchestrator | 2026-02-04 02:41:27.241860 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-04 02:41:27.241872 | orchestrator | Wednesday 04 February 2026 02:41:25 +0000 (0:00:02.217) 0:00:08.228 **** 2026-02-04 02:41:27.241885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-04 02:41:27.241898 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-04 02:41:27.241910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-04 02:41:27.241922 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:27.241934 | 
orchestrator | 2026-02-04 02:41:27.241967 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-04 02:41:27.241976 | orchestrator | Wednesday 04 February 2026 02:41:25 +0000 (0:00:00.657) 0:00:08.886 **** 2026-02-04 02:41:27.241985 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-04 02:41:27.241996 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-04 02:41:27.242004 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-04 02:41:27.242011 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:27.242070 | orchestrator | 2026-02-04 02:41:27.242078 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-04 02:41:27.242085 | orchestrator | Wednesday 04 February 2026 02:41:26 +0000 (0:00:01.045) 0:00:09.931 **** 2026-02-04 02:41:27.242103 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:27.242130 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:27.242144 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:27.242156 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:27.242190 | orchestrator | 2026-02-04 02:41:27.242202 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-04 02:41:27.242214 | orchestrator | Wednesday 04 February 2026 02:41:27 +0000 (0:00:00.168) 0:00:10.100 **** 2026-02-04 02:41:27.242225 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd8f725914c3c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-04 02:41:23.826013', 'end': '2026-02-04 02:41:23.909693', 'delta': '0:00:00.083680', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d8f725914c3c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-04 02:41:27.242235 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e8207b686900', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-04 02:41:24.471312', 'end': '2026-02-04 02:41:24.516230', 'delta': '0:00:00.044918', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e8207b686900'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-04 02:41:27.242252 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'c48be97cec44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-04 02:41:24.970553', 'end': '2026-02-04 02:41:25.011911', 'delta': '0:00:00.041358', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c48be97cec44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-04 02:41:34.183180 | orchestrator | 2026-02-04 02:41:34.183259 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-04 02:41:34.183268 | orchestrator | Wednesday 04 February 2026 02:41:27 +0000 (0:00:00.195) 0:00:10.296 **** 2026-02-04 02:41:34.183290 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:34.183297 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:34.183303 | 
orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:34.183308 | orchestrator | 2026-02-04 02:41:34.183314 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-04 02:41:34.183320 | orchestrator | Wednesday 04 February 2026 02:41:27 +0000 (0:00:00.435) 0:00:10.731 **** 2026-02-04 02:41:34.183326 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-04 02:41:34.183332 | orchestrator | 2026-02-04 02:41:34.183348 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-04 02:41:34.183354 | orchestrator | Wednesday 04 February 2026 02:41:29 +0000 (0:00:01.681) 0:00:12.413 **** 2026-02-04 02:41:34.183359 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:34.183365 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:34.183370 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:34.183376 | orchestrator | 2026-02-04 02:41:34.183381 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-04 02:41:34.183387 | orchestrator | Wednesday 04 February 2026 02:41:29 +0000 (0:00:00.321) 0:00:12.735 **** 2026-02-04 02:41:34.183392 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:34.183398 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:34.183403 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:34.183408 | orchestrator | 2026-02-04 02:41:34.183414 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-04 02:41:34.183419 | orchestrator | Wednesday 04 February 2026 02:41:30 +0000 (0:00:00.887) 0:00:13.623 **** 2026-02-04 02:41:34.183425 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:34.183430 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:34.183435 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:34.183441 | orchestrator | 2026-02-04 02:41:34.183447 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-04 02:41:34.183453 | orchestrator | Wednesday 04 February 2026 02:41:30 +0000 (0:00:00.307) 0:00:13.930 **** 2026-02-04 02:41:34.183458 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:34.183464 | orchestrator | 2026-02-04 02:41:34.183469 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-04 02:41:34.183474 | orchestrator | Wednesday 04 February 2026 02:41:30 +0000 (0:00:00.133) 0:00:14.064 **** 2026-02-04 02:41:34.183480 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:34.183485 | orchestrator | 2026-02-04 02:41:34.183491 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-04 02:41:34.183496 | orchestrator | Wednesday 04 February 2026 02:41:31 +0000 (0:00:00.240) 0:00:14.304 **** 2026-02-04 02:41:34.183502 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:34.183507 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:34.183513 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:34.183518 | orchestrator | 2026-02-04 02:41:34.183524 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-04 02:41:34.183529 | orchestrator | Wednesday 04 February 2026 02:41:31 +0000 (0:00:00.294) 0:00:14.599 **** 2026-02-04 02:41:34.183534 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:34.183540 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:34.183545 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:34.183551 | orchestrator | 2026-02-04 02:41:34.183556 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-04 02:41:34.183561 | orchestrator | Wednesday 04 February 2026 02:41:31 +0000 (0:00:00.310) 0:00:14.909 **** 2026-02-04 02:41:34.183567 | orchestrator | skipping: [testbed-node-3] 
2026-02-04 02:41:34.183572 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:34.183578 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:34.183583 | orchestrator | 2026-02-04 02:41:34.183589 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-04 02:41:34.183594 | orchestrator | Wednesday 04 February 2026 02:41:32 +0000 (0:00:00.561) 0:00:15.470 **** 2026-02-04 02:41:34.183604 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:34.183610 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:34.183615 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:34.183621 | orchestrator | 2026-02-04 02:41:34.183626 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-04 02:41:34.183632 | orchestrator | Wednesday 04 February 2026 02:41:32 +0000 (0:00:00.328) 0:00:15.799 **** 2026-02-04 02:41:34.183637 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:34.183643 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:34.183648 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:34.183654 | orchestrator | 2026-02-04 02:41:34.183659 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-04 02:41:34.183664 | orchestrator | Wednesday 04 February 2026 02:41:33 +0000 (0:00:00.323) 0:00:16.123 **** 2026-02-04 02:41:34.183670 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:34.183675 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:34.183681 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:34.183686 | orchestrator | 2026-02-04 02:41:34.183691 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-04 02:41:34.183697 | orchestrator | Wednesday 04 February 2026 02:41:33 +0000 (0:00:00.570) 0:00:16.693 **** 2026-02-04 02:41:34.183703 | orchestrator | skipping: [testbed-node-3] 
2026-02-04 02:41:34.183752 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:34.183759 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:34.183765 | orchestrator | 2026-02-04 02:41:34.183771 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-04 02:41:34.183778 | orchestrator | Wednesday 04 February 2026 02:41:33 +0000 (0:00:00.337) 0:00:17.031 **** 2026-02-04 02:41:34.183799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f', 'dm-uuid-LVM-8XaWcwBldrFACyhn8O8pDrkh8WYfwfMh8YdRgn42SXPKkSSmdqnloX2coya2uTEh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.183819 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e', 'dm-uuid-LVM-BggcAryejjvGBF4uvp6BcYG8cW5k2lInqXUvcrL0euXIKDnaXO5lD17ef9ulmfzT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.183835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.183845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.183854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.183870 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.183880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-04 02:41:34.183889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.183899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.183916 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.304406 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c', 'dm-uuid-LVM-jabOFLmF8RS1U4YRftNuTtdThdIFxea35ctI13zu0z0FRbKQORFQtA0W3pu2nuf0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.304511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.304552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843', 'dm-uuid-LVM-GuQppvMqMgPM92HHdmch1RUlEtgMK7bAQGkZWEBmxgWBBqnmby4j6kn1XrU8W6rj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.304585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LUqg5q-XQXl-4J84-Fu4r-xNUp-Z07d-jQvh8Z', 'scsi-0QEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388', 'scsi-SQEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.304606 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PkP1x1-WFQe-TRGf-2R1c-oEQv-Qw43-IKwaXF', 'scsi-0QEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40', 'scsi-SQEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.304618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.304641 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811', 'scsi-SQEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.304653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.304666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-19-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.304678 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:34.304692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.304704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.304776 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.433098 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.433200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.433217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.433260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16', 
'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.433280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af', 'dm-uuid-LVM-jfhjIQs9I12AbVZ4uHpbas8Q8DuoJ56eVvgnpRveGHUC1VWvw0UeAndBY1g45KfH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.433320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lVamx9-eYv9-88F9-1eWN-Mo2X-ZvoC-DQM8Qk', 'scsi-0QEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536', 'scsi-SQEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.433334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639', 'dm-uuid-LVM-vz2cv2RninoOpnjrAP98IcdUAgz3XBEESK6kemILvNkP1xNIipyazKS9tR60DcmG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.433355 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1Bwhrb-Xrjl-JUvU-1GoK-f7aN-SV93-uYzfRx', 'scsi-0QEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd', 'scsi-SQEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.433368 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.433381 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23', 'scsi-SQEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.433395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-20-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.433407 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:34.433429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.633504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.633608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.633647 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.633660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.633671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.633683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-04 02:41:34.633782 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16', 
'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.633811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Zb3vde-Jb13-PnWs-XBLv-pqCq-xraX-sEUQHY', 'scsi-0QEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675', 'scsi-SQEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.633825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2LO7pB-3JRT-gNDG-CXHX-CXgP-r5lI-kGILdq', 'scsi-0QEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52', 'scsi-SQEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.633837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b', 'scsi-SQEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.633850 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-20-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-04 02:41:34.633863 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:34.633876 | orchestrator | 2026-02-04 02:41:34.633888 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-02-04 02:41:34.633901 | orchestrator | Wednesday 04 February 2026 02:41:34 +0000 (0:00:00.567) 0:00:17.598 **** 2026-02-04 02:41:34.633922 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f', 'dm-uuid-LVM-8XaWcwBldrFACyhn8O8pDrkh8WYfwfMh8YdRgn42SXPKkSSmdqnloX2coya2uTEh'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.781865 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e', 'dm-uuid-LVM-BggcAryejjvGBF4uvp6BcYG8cW5k2lInqXUvcrL0euXIKDnaXO5lD17ef9ulmfzT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.781962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.781979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.781991 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.782003 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.782015 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.782130 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.782145 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.782156 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.782171 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-04 02:41:34.782235 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-LUqg5q-XQXl-4J84-Fu4r-xNUp-Z07d-jQvh8Z', 'scsi-0QEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388', 'scsi-SQEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.924189 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-PkP1x1-WFQe-TRGf-2R1c-oEQv-Qw43-IKwaXF', 'scsi-0QEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40', 'scsi-SQEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.924287 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c', 'dm-uuid-LVM-jabOFLmF8RS1U4YRftNuTtdThdIFxea35ctI13zu0z0FRbKQORFQtA0W3pu2nuf0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.924304 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811', 'scsi-SQEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.924317 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843', 'dm-uuid-LVM-GuQppvMqMgPM92HHdmch1RUlEtgMK7bAQGkZWEBmxgWBBqnmby4j6kn1XrU8W6rj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.924388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-19-59-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 
253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.924403 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.924417 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.924430 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.924443 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:34.924457 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.924470 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.924494 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:34.924514 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.034125 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.034222 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-04 02:41:35.034277 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af', 'dm-uuid-LVM-jfhjIQs9I12AbVZ4uHpbas8Q8DuoJ56eVvgnpRveGHUC1VWvw0UeAndBY1g45KfH'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.034310 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lVamx9-eYv9-88F9-1eWN-Mo2X-ZvoC-DQM8Qk', 'scsi-0QEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536', 'scsi-SQEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.034324 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639', 'dm-uuid-LVM-vz2cv2RninoOpnjrAP98IcdUAgz3XBEESK6kemILvNkP1xNIipyazKS9tR60DcmG'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.034336 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1Bwhrb-Xrjl-JUvU-1GoK-f7aN-SV93-uYzfRx', 'scsi-0QEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd', 'scsi-SQEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.034355 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.034372 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23', 'scsi-SQEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.034394 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.179939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-20-06-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.180020 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.180030 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:35.180039 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.180068 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.180089 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.180096 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.180118 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.180127 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.180181 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Zb3vde-Jb13-PnWs-XBLv-pqCq-xraX-sEUQHY', 'scsi-0QEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675', 'scsi-SQEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:35.180196 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2LO7pB-3JRT-gNDG-CXHX-CXgP-r5lI-kGILdq', 'scsi-0QEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52', 'scsi-SQEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:45.331087 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b', 'scsi-SQEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:45.331176 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-04-01-20-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-04 02:41:45.331202 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:45.331210 | orchestrator | 2026-02-04 02:41:45.331217 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-04 02:41:45.331224 | orchestrator | Wednesday 04 February 2026 02:41:35 +0000 (0:00:00.641) 0:00:18.239 **** 2026-02-04 02:41:45.331230 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:45.331236 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:45.331241 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:45.331247 | orchestrator | 2026-02-04 02:41:45.331252 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-04 02:41:45.331258 | orchestrator | Wednesday 04 February 2026 02:41:36 +0000 (0:00:00.936) 0:00:19.176 **** 2026-02-04 02:41:45.331264 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:45.331269 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:45.331275 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:45.331280 | orchestrator | 2026-02-04 02:41:45.331286 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-04 02:41:45.331291 | orchestrator | Wednesday 04 February 2026 02:41:36 +0000 (0:00:00.334) 0:00:19.511 **** 2026-02-04 02:41:45.331297 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:45.331303 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:45.331308 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:45.331314 | orchestrator | 2026-02-04 02:41:45.331341 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-04 02:41:45.331348 | orchestrator | Wednesday 04 February 2026 02:41:37 +0000 (0:00:00.643) 
0:00:20.155 **** 2026-02-04 02:41:45.331361 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:45.331366 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:45.331372 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:45.331377 | orchestrator | 2026-02-04 02:41:45.331383 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-04 02:41:45.331388 | orchestrator | Wednesday 04 February 2026 02:41:37 +0000 (0:00:00.344) 0:00:20.499 **** 2026-02-04 02:41:45.331394 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:45.331399 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:45.331405 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:45.331410 | orchestrator | 2026-02-04 02:41:45.331416 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-04 02:41:45.331421 | orchestrator | Wednesday 04 February 2026 02:41:38 +0000 (0:00:00.719) 0:00:21.219 **** 2026-02-04 02:41:45.331426 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:45.331432 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:45.331437 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:45.331443 | orchestrator | 2026-02-04 02:41:45.331448 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-04 02:41:45.331454 | orchestrator | Wednesday 04 February 2026 02:41:38 +0000 (0:00:00.339) 0:00:21.559 **** 2026-02-04 02:41:45.331459 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-04 02:41:45.331465 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-04 02:41:45.331471 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-04 02:41:45.331476 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-04 02:41:45.331482 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-04 02:41:45.331487 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-04 02:41:45.331492 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-04 02:41:45.331504 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-04 02:41:45.331509 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-04 02:41:45.331515 | orchestrator | 2026-02-04 02:41:45.331521 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-04 02:41:45.331526 | orchestrator | Wednesday 04 February 2026 02:41:39 +0000 (0:00:01.045) 0:00:22.604 **** 2026-02-04 02:41:45.331542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-04 02:41:45.331549 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-04 02:41:45.331554 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-04 02:41:45.331560 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:45.331565 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-04 02:41:45.331571 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-04 02:41:45.331576 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-04 02:41:45.331581 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:45.331587 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-04 02:41:45.331592 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-04 02:41:45.331598 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-04 02:41:45.331603 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:45.331609 | orchestrator | 2026-02-04 02:41:45.331614 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-04 02:41:45.331620 | orchestrator | Wednesday 04 February 2026 02:41:39 +0000 (0:00:00.377) 0:00:22.982 **** 2026-02-04 
02:41:45.331626 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:41:45.331632 | orchestrator | 2026-02-04 02:41:45.331637 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-04 02:41:45.331644 | orchestrator | Wednesday 04 February 2026 02:41:40 +0000 (0:00:00.749) 0:00:23.731 **** 2026-02-04 02:41:45.331651 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:45.331657 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:45.331663 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:45.331670 | orchestrator | 2026-02-04 02:41:45.331676 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-04 02:41:45.331682 | orchestrator | Wednesday 04 February 2026 02:41:40 +0000 (0:00:00.314) 0:00:24.045 **** 2026-02-04 02:41:45.331688 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:45.331694 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:45.331700 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:45.331707 | orchestrator | 2026-02-04 02:41:45.331714 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-04 02:41:45.331733 | orchestrator | Wednesday 04 February 2026 02:41:41 +0000 (0:00:00.317) 0:00:24.363 **** 2026-02-04 02:41:45.331739 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:45.331746 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:41:45.331752 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:41:45.331758 | orchestrator | 2026-02-04 02:41:45.331765 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-04 02:41:45.331771 | orchestrator | Wednesday 04 February 2026 02:41:41 +0000 (0:00:00.508) 0:00:24.871 **** 2026-02-04 
02:41:45.331778 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:45.331784 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:45.331790 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:45.331797 | orchestrator | 2026-02-04 02:41:45.331803 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-04 02:41:45.331810 | orchestrator | Wednesday 04 February 2026 02:41:42 +0000 (0:00:00.419) 0:00:25.291 **** 2026-02-04 02:41:45.331816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 02:41:45.331827 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 02:41:45.331837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 02:41:45.331843 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:45.331849 | orchestrator | 2026-02-04 02:41:45.331856 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-04 02:41:45.331862 | orchestrator | Wednesday 04 February 2026 02:41:42 +0000 (0:00:00.382) 0:00:25.674 **** 2026-02-04 02:41:45.331868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 02:41:45.331875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 02:41:45.331881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 02:41:45.331888 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:45.331894 | orchestrator | 2026-02-04 02:41:45.331900 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-04 02:41:45.331906 | orchestrator | Wednesday 04 February 2026 02:41:43 +0000 (0:00:00.396) 0:00:26.070 **** 2026-02-04 02:41:45.331913 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 02:41:45.331919 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 02:41:45.331925 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 02:41:45.331932 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:41:45.331938 | orchestrator | 2026-02-04 02:41:45.331945 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-04 02:41:45.331951 | orchestrator | Wednesday 04 February 2026 02:41:43 +0000 (0:00:00.377) 0:00:26.448 **** 2026-02-04 02:41:45.331957 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:41:45.331963 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:41:45.331970 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:41:45.331976 | orchestrator | 2026-02-04 02:41:45.331982 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-04 02:41:45.331989 | orchestrator | Wednesday 04 February 2026 02:41:43 +0000 (0:00:00.317) 0:00:26.765 **** 2026-02-04 02:41:45.331995 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-04 02:41:45.332001 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-04 02:41:45.332007 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-04 02:41:45.332012 | orchestrator | 2026-02-04 02:41:45.332018 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-04 02:41:45.332023 | orchestrator | Wednesday 04 February 2026 02:41:44 +0000 (0:00:00.774) 0:00:27.540 **** 2026-02-04 02:41:45.332029 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 02:41:45.332038 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 02:43:21.891667 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 02:43:21.891787 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-04 02:43:21.891807 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-04 02:43:21.891867 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-04 02:43:21.891883 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-04 02:43:21.891894 | orchestrator | 2026-02-04 02:43:21.891906 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-04 02:43:21.891918 | orchestrator | Wednesday 04 February 2026 02:41:45 +0000 (0:00:00.851) 0:00:28.391 **** 2026-02-04 02:43:21.891929 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-04 02:43:21.891941 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 02:43:21.891952 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 02:43:21.891962 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-04 02:43:21.891999 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-04 02:43:21.892011 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-04 02:43:21.892021 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-04 02:43:21.892032 | orchestrator | 2026-02-04 02:43:21.892043 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-04 02:43:21.892053 | orchestrator | Wednesday 04 February 2026 02:41:46 +0000 (0:00:01.673) 0:00:30.064 **** 2026-02-04 02:43:21.892064 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:43:21.892076 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:43:21.892087 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-04 02:43:21.892098 | orchestrator | 2026-02-04 02:43:21.892109 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-04 02:43:21.892120 | orchestrator | Wednesday 04 February 2026 02:41:47 +0000 (0:00:00.394) 0:00:30.459 **** 2026-02-04 02:43:21.892132 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 02:43:21.892147 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 02:43:21.892172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 02:43:21.892187 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 02:43:21.892200 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-04 02:43:21.892213 | orchestrator | 2026-02-04 02:43:21.892226 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-04 02:43:21.892239 | orchestrator | Wednesday 04 February 2026 02:42:30 +0000 (0:00:43.064) 0:01:13.523 **** 2026-02-04 02:43:21.892251 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892264 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892276 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892288 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892301 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892316 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892329 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-04 02:43:21.892342 | orchestrator | 2026-02-04 02:43:21.892355 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-04 02:43:21.892368 | orchestrator | Wednesday 04 February 2026 02:42:53 +0000 (0:00:23.139) 0:01:36.663 **** 2026-02-04 02:43:21.892397 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892420 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892433 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892446 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892458 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892470 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892483 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-04 02:43:21.892496 | orchestrator | 2026-02-04 02:43:21.892509 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-04 02:43:21.892523 | orchestrator | Wednesday 04 February 2026 02:43:04 +0000 (0:00:11.153) 0:01:47.816 **** 2026-02-04 02:43:21.892536 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892549 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 02:43:21.892561 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 02:43:21.892572 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892583 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 02:43:21.892594 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 02:43:21.892605 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892616 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 02:43:21.892626 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 02:43:21.892637 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892648 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 02:43:21.892659 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 02:43:21.892669 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892680 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-04 02:43:21.892691 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 02:43:21.892702 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-04 02:43:21.892712 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 02:43:21.892723 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 02:43:21.892734 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-04 02:43:21.892745 | orchestrator | 2026-02-04 02:43:21.892756 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:43:21.892773 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-04 02:43:21.892786 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-04 02:43:21.892798 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-04 02:43:21.892808 | orchestrator | 2026-02-04 02:43:21.892855 | orchestrator | 2026-02-04 02:43:21.892868 | orchestrator | 2026-02-04 02:43:21.892879 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:43:21.892890 | orchestrator | Wednesday 04 February 2026 02:43:21 +0000 (0:00:17.109) 0:02:04.926 **** 2026-02-04 02:43:21.892901 | orchestrator | =============================================================================== 2026-02-04 02:43:21.892919 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.06s 2026-02-04 02:43:21.892930 | orchestrator | generate keys ---------------------------------------------------------- 23.14s 2026-02-04 02:43:21.892940 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.11s 
2026-02-04 02:43:21.892951 | orchestrator | get keys from monitors ------------------------------------------------- 11.15s 2026-02-04 02:43:21.892962 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.22s 2026-02-04 02:43:21.892973 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.68s 2026-02-04 02:43:21.892984 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.67s 2026-02-04 02:43:21.892995 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.05s 2026-02-04 02:43:21.893005 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.05s 2026-02-04 02:43:21.893016 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.94s 2026-02-04 02:43:21.893027 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.89s 2026-02-04 02:43:21.893038 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.85s 2026-02-04 02:43:21.893049 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.82s 2026-02-04 02:43:21.893067 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.77s 2026-02-04 02:43:22.278660 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.75s 2026-02-04 02:43:22.278762 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.72s 2026-02-04 02:43:22.278777 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.70s 2026-02-04 02:43:22.278789 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.66s 2026-02-04 02:43:22.278800 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.66s 2026-02-04 
02:43:22.278812 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2026-02-04 02:43:24.647315 | orchestrator | 2026-02-04 02:43:24 | INFO  | Task 4a0a172c-c356-461f-8f2b-253d75bed6be (copy-ceph-keys) was prepared for execution. 2026-02-04 02:43:24.647452 | orchestrator | 2026-02-04 02:43:24 | INFO  | It takes a moment until task 4a0a172c-c356-461f-8f2b-253d75bed6be (copy-ceph-keys) has been started and output is visible here. 2026-02-04 02:44:03.414009 | orchestrator | 2026-02-04 02:44:03.414194 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-04 02:44:03.414213 | orchestrator | 2026-02-04 02:44:03.446965 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-04 02:44:03.447045 | orchestrator | Wednesday 04 February 2026 02:43:28 +0000 (0:00:00.161) 0:00:00.161 **** 2026-02-04 02:44:03.447059 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-04 02:44:03.447072 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 02:44:03.447083 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 02:44:03.447095 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 02:44:03.447106 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 02:44:03.447118 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-04 02:44:03.447129 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-04 02:44:03.447140 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-02-04 02:44:03.447182 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-04 02:44:03.447194 | orchestrator | 2026-02-04 02:44:03.447206 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-04 02:44:03.447217 | orchestrator | Wednesday 04 February 2026 02:43:33 +0000 (0:00:04.506) 0:00:04.668 **** 2026-02-04 02:44:03.447228 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-04 02:44:03.447254 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 02:44:03.447265 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 02:44:03.447277 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 02:44:03.447288 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-04 02:44:03.447299 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-04 02:44:03.447310 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-04 02:44:03.447321 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-04 02:44:03.447332 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-04 02:44:03.447343 | orchestrator | 2026-02-04 02:44:03.447355 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-04 02:44:03.447366 | orchestrator | Wednesday 04 February 2026 02:43:37 +0000 (0:00:04.180) 0:00:08.849 **** 2026-02-04 02:44:03.447378 
| orchestrator | changed: [testbed-manager -> localhost] 2026-02-04 02:44:03.447389 | orchestrator | 2026-02-04 02:44:03.447401 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-04 02:44:03.447412 | orchestrator | Wednesday 04 February 2026 02:43:38 +0000 (0:00:01.017) 0:00:09.866 **** 2026-02-04 02:44:03.447423 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-04 02:44:03.447435 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-04 02:44:03.447446 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-04 02:44:03.447458 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 02:44:03.447469 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-04 02:44:03.447479 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-04 02:44:03.447490 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-04 02:44:03.447501 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-04 02:44:03.447512 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-04 02:44:03.447523 | orchestrator | 2026-02-04 02:44:03.447534 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-04 02:44:03.447545 | orchestrator | Wednesday 04 February 2026 02:43:51 +0000 (0:00:13.419) 0:00:23.285 **** 2026-02-04 02:44:03.447556 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-04 02:44:03.447567 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-02-04 02:44:03.447578 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-04 02:44:03.447589 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-04 02:44:03.447643 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-04 02:44:03.447676 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-04 02:44:03.447688 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-04 02:44:03.447699 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-04 02:44:03.447710 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-04 02:44:03.447721 | orchestrator | 2026-02-04 02:44:03.447732 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-04 02:44:03.447743 | orchestrator | Wednesday 04 February 2026 02:43:56 +0000 (0:00:04.160) 0:00:27.446 **** 2026-02-04 02:44:03.447754 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-04 02:44:03.447766 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-04 02:44:03.447776 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-04 02:44:03.447787 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 02:44:03.447798 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-04 02:44:03.447809 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-04 02:44:03.447820 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-02-04 02:44:03.447831 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-04 02:44:03.447842 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-04 02:44:03.447852 | orchestrator | 2026-02-04 02:44:03.447897 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:44:03.447917 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 02:44:03.447930 | orchestrator | 2026-02-04 02:44:03.447941 | orchestrator | 2026-02-04 02:44:03.447952 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:44:03.447963 | orchestrator | Wednesday 04 February 2026 02:44:03 +0000 (0:00:06.948) 0:00:34.395 **** 2026-02-04 02:44:03.447974 | orchestrator | =============================================================================== 2026-02-04 02:44:03.447985 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.42s 2026-02-04 02:44:03.447996 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.95s 2026-02-04 02:44:03.448007 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.51s 2026-02-04 02:44:03.448018 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.18s 2026-02-04 02:44:03.448028 | orchestrator | Check if target directories exist --------------------------------------- 4.16s 2026-02-04 02:44:03.448039 | orchestrator | Create share directory -------------------------------------------------- 1.02s 2026-02-04 02:44:15.787296 | orchestrator | 2026-02-04 02:44:15 | INFO  | Task f4a1a51c-b9a0-468a-b85a-5c8a52a5d723 (cephclient) was prepared for execution. 
2026-02-04 02:44:15.787406 | orchestrator | 2026-02-04 02:44:15 | INFO  | It takes a moment until task f4a1a51c-b9a0-468a-b85a-5c8a52a5d723 (cephclient) has been started and output is visible here.
2026-02-04 02:45:17.519937 | orchestrator |
2026-02-04 02:45:17.520065 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-04 02:45:17.520083 | orchestrator |
2026-02-04 02:45:17.520095 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-04 02:45:17.520107 | orchestrator | Wednesday 04 February 2026 02:44:20 +0000 (0:00:00.235) 0:00:00.235 ****
2026-02-04 02:45:17.520118 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-04 02:45:17.520157 | orchestrator |
2026-02-04 02:45:17.520169 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-04 02:45:17.520180 | orchestrator | Wednesday 04 February 2026 02:44:20 +0000 (0:00:00.234) 0:00:00.470 ****
2026-02-04 02:45:17.520191 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-04 02:45:17.520202 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-04 02:45:17.520217 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-04 02:45:17.520237 | orchestrator |
2026-02-04 02:45:17.520249 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-04 02:45:17.520260 | orchestrator | Wednesday 04 February 2026 02:44:21 +0000 (0:00:01.298) 0:00:01.768 ****
2026-02-04 02:45:17.520271 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-04 02:45:17.520282 | orchestrator |
2026-02-04 02:45:17.520293 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-04 02:45:17.520304 | orchestrator | Wednesday 04 February 2026 02:44:23 +0000 (0:00:01.461) 0:00:03.230 ****
2026-02-04 02:45:17.520315 | orchestrator | changed: [testbed-manager]
2026-02-04 02:45:17.520326 | orchestrator |
2026-02-04 02:45:17.520337 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-04 02:45:17.520348 | orchestrator | Wednesday 04 February 2026 02:44:24 +0000 (0:00:00.951) 0:00:04.182 ****
2026-02-04 02:45:17.520359 | orchestrator | changed: [testbed-manager]
2026-02-04 02:45:17.520370 | orchestrator |
2026-02-04 02:45:17.520381 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-04 02:45:17.520391 | orchestrator | Wednesday 04 February 2026 02:44:24 +0000 (0:00:00.915) 0:00:05.098 ****
2026-02-04 02:45:17.520402 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-02-04 02:45:17.520413 | orchestrator | ok: [testbed-manager]
2026-02-04 02:45:17.520424 | orchestrator |
2026-02-04 02:45:17.520435 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-04 02:45:17.520448 | orchestrator | Wednesday 04 February 2026 02:45:07 +0000 (0:00:42.484) 0:00:47.582 ****
2026-02-04 02:45:17.520461 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-04 02:45:17.520475 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-04 02:45:17.520488 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-04 02:45:17.520501 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-04 02:45:17.520513 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-04 02:45:17.520527 | orchestrator |
2026-02-04 02:45:17.520539 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-04 02:45:17.520552 | orchestrator | Wednesday 04 February 2026 02:45:11 +0000 (0:00:04.209) 0:00:51.792 ****
2026-02-04 02:45:17.520565 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-04 02:45:17.520578 | orchestrator |
2026-02-04 02:45:17.520591 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-04 02:45:17.520603 | orchestrator | Wednesday 04 February 2026 02:45:12 +0000 (0:00:00.471) 0:00:52.263 ****
2026-02-04 02:45:17.520615 | orchestrator | skipping: [testbed-manager]
2026-02-04 02:45:17.520628 | orchestrator |
2026-02-04 02:45:17.520641 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-04 02:45:17.520654 | orchestrator | Wednesday 04 February 2026 02:45:12 +0000 (0:00:00.142) 0:00:52.406 ****
2026-02-04 02:45:17.520666 | orchestrator | skipping: [testbed-manager]
2026-02-04 02:45:17.520679 | orchestrator |
2026-02-04 02:45:17.520691 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-02-04 02:45:17.520704 | orchestrator | Wednesday 04 February 2026 02:45:12 +0000 (0:00:00.582) 0:00:52.988 ****
2026-02-04 02:45:17.520758 | orchestrator | changed: [testbed-manager]
2026-02-04 02:45:17.520773 | orchestrator |
2026-02-04 02:45:17.520787 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-04 02:45:17.520812 | orchestrator | Wednesday 04 February 2026 02:45:14 +0000 (0:00:01.484) 0:00:54.473 ****
2026-02-04 02:45:17.520824 | orchestrator | changed: [testbed-manager]
2026-02-04 02:45:17.520834 | orchestrator |
2026-02-04 02:45:17.520845 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] *******
2026-02-04 02:45:17.520856 | orchestrator | Wednesday 04 February 2026 02:45:14 +0000 (0:00:00.685) 0:00:55.159 ****
2026-02-04 02:45:17.520866 | orchestrator | changed: [testbed-manager]
2026-02-04 02:45:17.520877 | orchestrator |
2026-02-04 02:45:17.520888 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-02-04 02:45:17.520899 | orchestrator | Wednesday 04 February 2026 02:45:15 +0000 (0:00:00.616) 0:00:55.775 ****
2026-02-04 02:45:17.520910 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-04 02:45:17.520921 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-04 02:45:17.520931 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-04 02:45:17.520942 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-04 02:45:17.520953 | orchestrator |
2026-02-04 02:45:17.520964 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 02:45:17.520976 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 02:45:17.520987 | orchestrator |
2026-02-04 02:45:17.520998 | orchestrator |
2026-02-04 02:45:17.521026 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 02:45:17.521038 | orchestrator | Wednesday 04 February 2026 02:45:17 +0000 (0:00:01.526) 0:00:57.302 ****
2026-02-04 02:45:17.521049 | orchestrator | ===============================================================================
2026-02-04 02:45:17.521059 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 42.48s
2026-02-04 02:45:17.521070 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.21s
2026-02-04 02:45:17.521081 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.53s
2026-02-04 02:45:17.521091 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.48s
2026-02-04 02:45:17.521102 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.46s
2026-02-04 02:45:17.521113 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.30s
2026-02-04 02:45:17.521123 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s
2026-02-04 02:45:17.521134 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.92s
2026-02-04 02:45:17.521145 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.69s
2026-02-04 02:45:17.521155 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.62s
2026-02-04 02:45:17.521166 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.58s
2026-02-04 02:45:17.521176 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s
2026-02-04 02:45:17.521187 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2026-02-04 02:45:17.521198 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2026-02-04 02:45:19.922908 | orchestrator | 2026-02-04 02:45:19 | INFO  | Task 07146115-5bdd-429f-ba67-36e1673a6201 (ceph-bootstrap-dashboard) was prepared for execution.
2026-02-04 02:45:19.922992 | orchestrator | 2026-02-04 02:45:19 | INFO  | It takes a moment until task 07146115-5bdd-429f-ba67-36e1673a6201 (ceph-bootstrap-dashboard) has been started and output is visible here.
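The `Copy wrapper scripts` task above installs thin wrappers for `ceph`, `ceph-authtool`, `rados`, `radosgw-admin`, and `rbd`, so these tools can be invoked on the manager even though they only exist inside the cephclient container. A hedged sketch of what such a wrapper might look like; the compose invocation, project directory, and service name are assumptions, not taken from the role:

```shell
#!/usr/bin/env bash
set -eu
# Write a demo wrapper to a scratch path (the role would install it on PATH,
# e.g. under /usr/local/bin/ceph).
wrapper=/tmp/ceph-wrapper-demo
cat > "$wrapper" <<'EOF'
#!/usr/bin/env bash
# Forward every argument to the ceph binary inside the cephclient container.
exec docker compose --project-directory /opt/cephclient exec cephclient ceph "$@"
EOF
chmod +x "$wrapper"
cat "$wrapper"
```

`exec` replaces the wrapper process with the compose call, so exit codes and signals pass straight through to the containerized tool.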
2026-02-04 02:46:36.260738 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-04 02:46:36.260852 | orchestrator | 2.16.14
2026-02-04 02:46:36.260871 | orchestrator |
2026-02-04 02:46:36.260885 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************
2026-02-04 02:46:36.260897 | orchestrator |
2026-02-04 02:46:36.260908 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-02-04 02:46:36.260943 | orchestrator | Wednesday 04 February 2026 02:45:24 +0000 (0:00:00.286) 0:00:00.286 ****
2026-02-04 02:46:36.260955 | orchestrator | changed: [testbed-manager]
2026-02-04 02:46:36.260967 | orchestrator |
2026-02-04 02:46:36.260978 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-02-04 02:46:36.260989 | orchestrator | Wednesday 04 February 2026 02:45:26 +0000 (0:00:02.214) 0:00:02.500 ****
2026-02-04 02:46:36.261000 | orchestrator | changed: [testbed-manager]
2026-02-04 02:46:36.261011 | orchestrator |
2026-02-04 02:46:36.261021 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-02-04 02:46:36.261032 | orchestrator | Wednesday 04 February 2026 02:45:27 +0000 (0:00:01.115) 0:00:03.616 ****
2026-02-04 02:46:36.261043 | orchestrator | changed: [testbed-manager]
2026-02-04 02:46:36.261053 | orchestrator |
2026-02-04 02:46:36.261064 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-02-04 02:46:36.261075 | orchestrator | Wednesday 04 February 2026 02:45:28 +0000 (0:00:01.068) 0:00:04.684 ****
2026-02-04 02:46:36.261086 | orchestrator | changed: [testbed-manager]
2026-02-04 02:46:36.261096 | orchestrator |
2026-02-04 02:46:36.261107 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-02-04 02:46:36.261118 | orchestrator | Wednesday 04 February 2026 02:45:29 +0000 (0:00:01.158) 0:00:05.843 ****
2026-02-04 02:46:36.261129 | orchestrator | changed: [testbed-manager]
2026-02-04 02:46:36.261140 | orchestrator |
2026-02-04 02:46:36.261150 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-02-04 02:46:36.261161 | orchestrator | Wednesday 04 February 2026 02:45:30 +0000 (0:00:01.064) 0:00:06.907 ****
2026-02-04 02:46:36.261185 | orchestrator | changed: [testbed-manager]
2026-02-04 02:46:36.261197 | orchestrator |
2026-02-04 02:46:36.261208 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-02-04 02:46:36.261219 | orchestrator | Wednesday 04 February 2026 02:45:31 +0000 (0:00:01.041) 0:00:07.949 ****
2026-02-04 02:46:36.261230 | orchestrator | changed: [testbed-manager]
2026-02-04 02:46:36.261243 | orchestrator |
2026-02-04 02:46:36.261255 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-02-04 02:46:36.261268 | orchestrator | Wednesday 04 February 2026 02:45:34 +0000 (0:00:02.087) 0:00:10.036 ****
2026-02-04 02:46:36.261281 | orchestrator | changed: [testbed-manager]
2026-02-04 02:46:36.261293 | orchestrator |
2026-02-04 02:46:36.261305 | orchestrator | TASK [Create admin user] *******************************************************
2026-02-04 02:46:36.261317 | orchestrator | Wednesday 04 February 2026 02:45:35 +0000 (0:00:01.232) 0:00:11.268 ****
2026-02-04 02:46:36.261329 | orchestrator | changed: [testbed-manager]
2026-02-04 02:46:36.261341 | orchestrator |
2026-02-04 02:46:36.261353 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-02-04 02:46:36.261366 | orchestrator | Wednesday 04 February 2026 02:46:11 +0000 (0:00:36.084) 0:00:47.353 ****
2026-02-04 02:46:36.261378 | orchestrator | skipping: [testbed-manager]
2026-02-04 02:46:36.261390 | orchestrator |
2026-02-04 02:46:36.261403 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-04 02:46:36.261416 | orchestrator |
2026-02-04 02:46:36.261430 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-04 02:46:36.261442 | orchestrator | Wednesday 04 February 2026 02:46:11 +0000 (0:00:00.164) 0:00:47.518 ****
2026-02-04 02:46:36.261472 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:46:36.261485 | orchestrator |
2026-02-04 02:46:36.261498 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-04 02:46:36.261510 | orchestrator |
2026-02-04 02:46:36.261522 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-04 02:46:36.261535 | orchestrator | Wednesday 04 February 2026 02:46:23 +0000 (0:00:11.779) 0:00:59.297 ****
2026-02-04 02:46:36.261547 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:46:36.261560 | orchestrator |
2026-02-04 02:46:36.261573 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-04 02:46:36.261593 | orchestrator |
2026-02-04 02:46:36.261632 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-04 02:46:36.261645 | orchestrator | Wednesday 04 February 2026 02:46:24 +0000 (0:00:01.218) 0:01:00.515 ****
2026-02-04 02:46:36.261656 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:46:36.261667 | orchestrator |
2026-02-04 02:46:36.261677 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 02:46:36.261689 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-04 02:46:36.261701 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 02:46:36.261712 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 02:46:36.261723 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 02:46:36.261733 | orchestrator |
2026-02-04 02:46:36.261744 | orchestrator |
2026-02-04 02:46:36.261755 | orchestrator |
2026-02-04 02:46:36.261766 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 02:46:36.261777 | orchestrator | Wednesday 04 February 2026 02:46:35 +0000 (0:00:11.286) 0:01:11.803 ****
2026-02-04 02:46:36.261787 | orchestrator | ===============================================================================
2026-02-04 02:46:36.261798 | orchestrator | Create admin user ------------------------------------------------------ 36.08s
2026-02-04 02:46:36.261824 | orchestrator | Restart ceph manager service ------------------------------------------- 24.28s
2026-02-04 02:46:36.261836 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.21s
2026-02-04 02:46:36.261846 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.09s
2026-02-04 02:46:36.261857 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.23s
2026-02-04 02:46:36.261868 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.16s
2026-02-04 02:46:36.261883 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.12s
2026-02-04 02:46:36.261901 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.07s
2026-02-04 02:46:36.261912 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.06s
2026-02-04 02:46:36.261922 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.04s
2026-02-04 02:46:36.261933 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s
2026-02-04 02:46:36.600779 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh
2026-02-04 02:46:38.653834 | orchestrator | 2026-02-04 02:46:38 | INFO  | Task e31952a7-3522-4fb7-8e0c-e98b200d7060 (keystone) was prepared for execution.
2026-02-04 02:46:38.653952 | orchestrator | 2026-02-04 02:46:38 | INFO  | It takes a moment until task e31952a7-3522-4fb7-8e0c-e98b200d7060 (keystone) has been started and output is visible here.
2026-02-04 02:46:45.857298 | orchestrator |
2026-02-04 02:46:45.857407 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 02:46:45.857423 | orchestrator |
2026-02-04 02:46:45.857436 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 02:46:45.857486 | orchestrator | Wednesday 04 February 2026 02:46:42 +0000 (0:00:00.258) 0:00:00.258 ****
2026-02-04 02:46:45.857499 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:46:45.857513 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:46:45.857524 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:46:45.857536 | orchestrator |
2026-02-04 02:46:45.857548 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 02:46:45.857560 | orchestrator | Wednesday 04 February 2026 02:46:43 +0000 (0:00:00.336) 0:00:00.595 ****
2026-02-04 02:46:45.857651 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-02-04 02:46:45.857666 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-02-04 02:46:45.857679 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-02-04 02:46:45.857692 | orchestrator |
2026-02-04 02:46:45.857704 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-02-04 02:46:45.857715 | orchestrator |
2026-02-04 02:46:45.857727 | orchestrator |
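The dashboard bootstrap tasks above map closely onto the ceph CLI. A hedged reconstruction of the equivalent command sequence, assembled from the task names; the password-file path is an assumption, and since these would run against a live cluster (via the cephclient wrapper), this sketch only collects and prints the commands instead of executing them:

```shell
#!/usr/bin/env bash
set -eu
# Command sequence inferred from the task names in the log; on a real
# deployment each line would be executed on testbed-manager.
cmds=(
  "ceph mgr module disable dashboard"
  "ceph config set mgr mgr/dashboard/ssl false"
  "ceph config set mgr mgr/dashboard/server_port 7000"
  "ceph config set mgr mgr/dashboard/server_addr 0.0.0.0"
  "ceph config set mgr mgr/dashboard/standby_behaviour error"
  "ceph config set mgr mgr/dashboard/standby_error_status_code 404"
  "ceph mgr module enable dashboard"
  # /tmp/ceph_dashboard_password is a hypothetical path for the temporary
  # password file written (and later removed) by the play.
  "ceph dashboard ac-user-create admin -i /tmp/ceph_dashboard_password administrator"
)
printf '%s\n' "${cmds[@]}"
```

Disabling and re-enabling the module is what makes the mgr pick up the changed `mgr/dashboard/*` settings, which is also why the mgr services on all three nodes are restarted afterwards.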
TASK [keystone : include_tasks] ************************************************
2026-02-04 02:46:45.857738 | orchestrator | Wednesday 04 February 2026 02:46:43 +0000 (0:00:00.439) 0:00:01.034 ****
2026-02-04 02:46:45.857749 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:46:45.857762 | orchestrator |
2026-02-04 02:46:45.857775 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-02-04 02:46:45.857787 | orchestrator | Wednesday 04 February 2026 02:46:44 +0000 (0:00:00.561) 0:00:01.596 ****
2026-02-04 02:46:45.857804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-04 02:46:45.857822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-04 02:46:45.857863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-04 02:46:45.857907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-04 02:46:45.857922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-04 02:46:45.857934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-04 02:46:45.857946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:46:45.857958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:46:45.857971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:46:45.857991 | orchestrator |
2026-02-04 02:46:45.858004 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-02-04 02:46:45.858081 | orchestrator | Wednesday 04 February 2026 02:46:45 +0000 (0:00:01.658) 0:00:03.254 ****
2026-02-04 02:46:51.452993 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:46:51.453085 | orchestrator |
2026-02-04 02:46:51.453096 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-02-04 02:46:51.453118 | orchestrator | Wednesday 04 February 2026 02:46:46 +0000 (0:00:00.313) 0:00:03.568 ****
2026-02-04 02:46:51.453126 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:46:51.453133 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:46:51.453140 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:46:51.453147 | orchestrator |
2026-02-04 02:46:51.453154 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-02-04 02:46:51.453161 | orchestrator | Wednesday 04 February 2026 02:46:46 +0000 (0:00:00.302) 0:00:03.870 ****
2026-02-04 02:46:51.453168 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 02:46:51.453175 | orchestrator |
2026-02-04 02:46:51.453181 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-04 02:46:51.453188 | orchestrator | Wednesday 04 February 2026 02:46:47 +0000 (0:00:00.818) 0:00:04.688 ****
2026-02-04 02:46:51.453196 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:46:51.453203 | orchestrator |
2026-02-04 02:46:51.453210 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-02-04 02:46:51.453216 | orchestrator | Wednesday 04 February 2026 02:46:47 +0000 (0:00:00.564) 0:00:05.252 ****
2026-02-04 02:46:51.453227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-04 02:46:51.453237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-04 02:46:51.453246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-04 02:46:51.453291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-04 02:46:51.453302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-04 02:46:51.453310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-04 02:46:51.453317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:46:51.453324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:46:51.453336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:46:51.453343 | orchestrator |
2026-02-04 02:46:51.453350 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-02-04 02:46:51.453357 | orchestrator | Wednesday 04 February 2026 02:46:50 +0000 (0:00:03.032) 0:00:08.284 ****
2026-02-04 02:46:51.453371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'},
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 02:46:52.227928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 02:46:52.228080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 02:46:52.228101 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:46:52.228119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 02:46:52.228156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 02:46:52.228174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 02:46:52.228187 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:46:52.228278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 02:46:52.228293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-02-04 02:46:52.228305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 02:46:52.228324 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:46:52.228336 | orchestrator | 2026-02-04 02:46:52.228348 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-04 02:46:52.228361 | orchestrator | Wednesday 04 February 2026 02:46:51 +0000 (0:00:00.572) 0:00:08.857 **** 2026-02-04 02:46:52.228373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 02:46:52.228391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 02:46:52.228412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 02:46:55.377469 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:46:55.377643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 02:46:55.377682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 02:46:55.377721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 02:46:55.377734 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 02:46:55.377762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-04 02:46:55.377775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 02:46:55.377807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 02:46:55.377819 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:46:55.377831 | orchestrator | 2026-02-04 02:46:55.377843 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-04 02:46:55.377855 | orchestrator | Wednesday 04 February 2026 02:46:52 +0000 (0:00:00.770) 0:00:09.627 **** 2026-02-04 02:46:55.377867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 02:46:55.377887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 02:46:55.377906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 02:46:55.377928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 02:47:00.082118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-04 02:47:00.082208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-04 02:47:00.082216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 02:47:00.082221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 02:47:00.082232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-04 
02:47:00.082237 | orchestrator | 2026-02-04 02:47:00.082243 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-04 02:47:00.082249 | orchestrator | Wednesday 04 February 2026 02:46:55 +0000 (0:00:03.147) 0:00:12.775 **** 2026-02-04 02:47:00.082265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 02:47:00.082271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-04 02:47:00.082281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 02:47:00.082286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 02:47:00.082294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-04 02:47:00.082303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 02:47:03.567192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:47:03.567331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:47:03.567349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:47:03.567362 | orchestrator |
2026-02-04 02:47:03.567376 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-02-04 02:47:03.567389 | orchestrator | Wednesday 04 February 2026 02:47:00 +0000 (0:00:04.698) 0:00:17.474 ****
2026-02-04 02:47:03.567401 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:47:03.567413 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:47:03.567424 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:47:03.567434 | orchestrator |
2026-02-04 02:47:03.567446 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-02-04 02:47:03.567457 | orchestrator | Wednesday 04 February 2026 02:47:01 +0000 (0:00:01.407) 0:00:18.881 ****
2026-02-04 02:47:03.567467 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:47:03.567478 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:47:03.567489 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:47:03.567500 | orchestrator |
2026-02-04 02:47:03.567511 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-02-04 02:47:03.567522 | orchestrator | Wednesday 04 February 2026 02:47:02 +0000 (0:00:00.614) 0:00:19.496 ****
2026-02-04 02:47:03.567533 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:47:03.567544 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:47:03.567554 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:47:03.567642 | orchestrator |
2026-02-04 02:47:03.567673 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-02-04 02:47:03.567685 | orchestrator | Wednesday 04 February 2026 02:47:02 +0000 (0:00:00.522) 0:00:20.018 ****
2026-02-04 02:47:03.567696 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:47:03.567707 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:47:03.567718 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:47:03.567732 | orchestrator |
2026-02-04 02:47:03.567746 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-02-04 02:47:03.567759 | orchestrator | Wednesday 04 February 2026 02:47:02 +0000 (0:00:00.327) 0:00:20.346 ****
2026-02-04 02:47:03.567793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-04 02:47:03.567820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-04 02:47:03.567835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:47:03.567849 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:47:03.567864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-04 02:47:03.567884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-04 02:47:03.567898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:47:03.567923 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:47:03.567959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-04 02:47:21.959159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-04 02:47:21.959298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:47:21.959331 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:47:21.959355 | orchestrator |
2026-02-04 02:47:21.959376 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-04 02:47:21.959394 | orchestrator | Wednesday 04 February 2026 02:47:03 +0000 (0:00:00.616) 0:00:20.963 ****
2026-02-04 02:47:21.959406 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:47:21.959417 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:47:21.959428 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:47:21.959440 | orchestrator |
2026-02-04 02:47:21.959451 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-02-04 02:47:21.959462 | orchestrator | Wednesday 04 February 2026 02:47:03 +0000 (0:00:00.308) 0:00:21.272 ****
2026-02-04 02:47:21.959473 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-04 02:47:21.959486 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-04 02:47:21.959594 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-04 02:47:21.959608 | orchestrator |
2026-02-04 02:47:21.959634 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-02-04 02:47:21.959646 | orchestrator | Wednesday 04 February 2026 02:47:05 +0000 (0:00:01.782) 0:00:23.054 ****
2026-02-04 02:47:21.959657 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 02:47:21.959668 | orchestrator |
2026-02-04 02:47:21.959681 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-02-04 02:47:21.959693 | orchestrator | Wednesday 04 February 2026 02:47:06 +0000 (0:00:00.891) 0:00:23.946 ****
2026-02-04 02:47:21.959705 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:47:21.959719 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:47:21.959731 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:47:21.959743 | orchestrator |
2026-02-04 02:47:21.959805 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-02-04 02:47:21.959819 | orchestrator | Wednesday 04 February 2026 02:47:07 +0000 (0:00:00.562) 0:00:24.509 ****
2026-02-04 02:47:21.959832 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 02:47:21.959845 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-04 02:47:21.959858 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-04 02:47:21.959869 | orchestrator |
2026-02-04 02:47:21.959880 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-02-04 02:47:21.959892 | orchestrator | Wednesday 04 February 2026 02:47:08 +0000 (0:00:01.076) 0:00:25.585 ****
2026-02-04 02:47:21.959902 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:47:21.959915 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:47:21.959926 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:47:21.959936 | orchestrator |
2026-02-04 02:47:21.959947 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-02-04 02:47:21.959958 | orchestrator | Wednesday 04 February 2026 02:47:08 +0000 (0:00:00.509) 0:00:26.095 ****
2026-02-04 02:47:21.959969 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-04 02:47:21.959981 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-04 02:47:21.959992 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-04 02:47:21.960003 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-04 02:47:21.960014 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-04 02:47:21.960025 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-04 02:47:21.960036 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-04 02:47:21.960048 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-04 02:47:21.960080 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-04 02:47:21.960092 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-04 02:47:21.960103 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-04 02:47:21.960114 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-04 02:47:21.960125 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-04 02:47:21.960136 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-04 02:47:21.960147 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-04 02:47:21.960157 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-04 02:47:21.960178 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-04 02:47:21.960189 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-04 02:47:21.960200 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-04 02:47:21.960211 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-04 02:47:21.960221 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-04 02:47:21.960232 | orchestrator |
2026-02-04 02:47:21.960243 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-02-04 02:47:21.960254 | orchestrator | Wednesday 04 February 2026 02:47:17 +0000 (0:00:08.491) 0:00:34.586 ****
2026-02-04 02:47:21.960264 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-04 02:47:21.960275 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-04 02:47:21.960286 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-04 02:47:21.960297 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-04 02:47:21.960308 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-04 02:47:21.960318 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-04 02:47:21.960329 | orchestrator |
2026-02-04 02:47:21.960340 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-02-04 02:47:21.960356 | orchestrator | Wednesday 04 February 2026 02:47:19 +0000 (0:00:02.528) 0:00:37.115 ****
2026-02-04 02:47:21.960372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-04 02:47:21.960394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-04 02:49:02.624427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-04 02:49:02.624609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-04 02:49:02.624641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-04 02:49:02.624653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-04 02:49:02.624664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:49:02.624707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:49:02.624727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-04 02:49:02.624738 | orchestrator |
2026-02-04 02:49:02.624750 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-04 02:49:02.624762 | orchestrator | Wednesday 04 February 2026 02:47:21 +0000 (0:00:02.238) 0:00:39.354 ****
2026-02-04 02:49:02.624772 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:49:02.624783 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:49:02.624793 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:49:02.624803 | orchestrator |
2026-02-04 02:49:02.624813 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-02-04 02:49:02.624823 | orchestrator | Wednesday 04 February 2026 02:47:22 +0000 (0:00:00.524) 0:00:39.879 ****
2026-02-04 02:49:02.624833 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:49:02.624842 | orchestrator |
2026-02-04 02:49:02.624852 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-02-04 02:49:02.624862 | orchestrator | Wednesday 04 February 2026 02:47:24 +0000 (0:00:02.182) 0:00:42.061 ****
2026-02-04 02:49:02.624872 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:49:02.624881 | orchestrator |
2026-02-04 02:49:02.624891 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-02-04 02:49:02.624901 | orchestrator | Wednesday 04 February 2026 02:47:26 +0000 (0:00:02.194) 0:00:44.256 ****
2026-02-04 02:49:02.624911 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:49:02.624921 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:49:02.624930 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:49:02.624942 | orchestrator |
2026-02-04 02:49:02.624954 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-02-04 02:49:02.624966 | orchestrator | Wednesday 04 February 2026 02:47:27 +0000 (0:00:00.773) 0:00:45.029 ****
2026-02-04 02:49:02.624977 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:49:02.624989 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:49:02.625000 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:49:02.625011 | orchestrator |
2026-02-04 02:49:02.625023 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-02-04 02:49:02.625040 | orchestrator | Wednesday 04 February 2026 02:47:27 +0000 (0:00:00.325) 0:00:45.355 ****
2026-02-04 02:49:02.625051 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:49:02.625064 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:49:02.625075 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:49:02.625087 | orchestrator |
2026-02-04 02:49:02.625098 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-02-04 02:49:02.625109 | orchestrator | Wednesday 04 February 2026 02:47:28 +0000 (0:00:00.352) 0:00:45.708 ****
2026-02-04 02:49:02.625121 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:49:02.625132 | orchestrator |
2026-02-04 02:49:02.625143 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-02-04 02:49:02.625154 | orchestrator | Wednesday 04 February 2026 02:47:42 +0000 (0:00:14.593) 0:01:00.302 ****
2026-02-04 02:49:02.625166 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:49:02.625177 | orchestrator |
2026-02-04 02:49:02.625188 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-04 02:49:02.625199 | orchestrator | Wednesday 04 February 2026 02:47:52 +0000 (0:00:09.932) 0:01:10.234 ****
2026-02-04 02:49:02.625217 | orchestrator |
2026-02-04 02:49:02.625228 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-04 02:49:02.625240 | orchestrator | Wednesday 04 February 2026 02:47:52 +0000 (0:00:00.070) 0:01:10.304 ****
2026-02-04 02:49:02.625251 | orchestrator |
2026-02-04 02:49:02.625263 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-04 02:49:02.625274 | orchestrator | Wednesday 04 February 2026 02:47:52 +0000 (0:00:00.070) 0:01:10.375 ****
2026-02-04 02:49:02.625286 | orchestrator |
2026-02-04 02:49:02.625297 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-02-04 02:49:02.625307 | orchestrator | Wednesday 04 February 2026 02:47:53 +0000 (0:00:00.073) 0:01:10.448 ****
2026-02-04 02:49:02.625316 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:49:02.625326 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:49:02.625336 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:49:02.625346 | orchestrator |
2026-02-04 02:49:02.625356 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-02-04 02:49:02.625365 | orchestrator | Wednesday 04 February 2026 02:48:42 +0000 (0:00:49.207) 0:01:59.656 ****
2026-02-04 02:49:02.625375 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:49:02.625385 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:49:02.625394 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:49:02.625404 | orchestrator |
2026-02-04 02:49:02.625414 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-02-04 02:49:02.625424 | orchestrator | Wednesday 04 February 2026 02:48:49 +0000 (0:00:07.635) 0:02:07.292 ****
2026-02-04 02:49:02.625472 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:49:02.625483 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:49:02.625492 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:49:02.625502 | orchestrator |
2026-02-04 02:49:02.625512 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-04 02:49:02.625522 | orchestrator | Wednesday 04 February 2026 02:49:01 +0000 (0:00:12.087) 0:02:19.379 ****
2026-02-04 02:49:02.625538 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:49:50.541484 | orchestrator |
2026-02-04 02:49:50.541607 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-02-04 02:49:50.541627 | orchestrator | Wednesday 04 February 2026 02:49:02 +0000 (0:00:00.643) 0:02:20.022 ****
2026-02-04 02:49:50.541648 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:49:50.541669 | orchestrator | ok: [testbed-node-1]
2026-02-04 02:49:50.541690 | orchestrator | ok: [testbed-node-2]
2026-02-04 02:49:50.541710 | orchestrator |
2026-02-04 02:49:50.541729 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-02-04 02:49:50.541751 | orchestrator | Wednesday 04 February 2026 02:49:03 +0000 (0:00:00.775) 0:02:20.798 ****
2026-02-04 02:49:50.541772 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:49:50.541793 | orchestrator |
2026-02-04 02:49:50.541813 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-02-04 02:49:50.541832 | orchestrator | Wednesday 04 February 2026 02:49:05 +0000 (0:00:02.194) 0:02:22.993 ****
2026-02-04 02:49:50.541850 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-02-04 02:49:50.541862 | orchestrator |
2026-02-04 02:49:50.541873 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-02-04 02:49:50.541884 | orchestrator | Wednesday 04 February 2026 02:49:16 +0000 (0:00:10.795) 0:02:33.788 ****
2026-02-04 02:49:50.541895 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-02-04 02:49:50.541905 | orchestrator |
2026-02-04 02:49:50.541916 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-02-04 02:49:50.541927 | orchestrator | Wednesday 04 February 2026 02:49:39 +0000 (0:00:22.800) 0:02:56.589 ****
2026-02-04 02:49:50.541938 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-02-04 02:49:50.541977 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-02-04 02:49:50.541989 | orchestrator |
2026-02-04 02:49:50.542000 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-02-04 02:49:50.542011 | orchestrator | Wednesday 04 February 2026 02:49:45 +0000 (0:00:06.466) 0:03:03.055 ****
2026-02-04 02:49:50.542091 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:49:50.542103 | orchestrator |
2026-02-04 02:49:50.542114 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-02-04 02:49:50.542125 | orchestrator | Wednesday 04 February 2026 02:49:45 +0000 (0:00:00.128) 0:03:03.184 ****
2026-02-04 02:49:50.542136 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:49:50.542147 | orchestrator |
2026-02-04 02:49:50.542158 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-02-04 02:49:50.542169 | orchestrator | Wednesday 04 February 2026 02:49:45 +0000 (0:00:00.131) 0:03:03.315 ****
2026-02-04 02:49:50.542179 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:49:50.542190 | orchestrator |
2026-02-04 02:49:50.542214 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-02-04 02:49:50.542226 | orchestrator | Wednesday 04 February 2026 02:49:46 +0000 (0:00:00.136) 0:03:03.451 ****
2026-02-04 02:49:50.542236 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:49:50.542254 | orchestrator |
2026-02-04 02:49:50.542277 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-02-04 02:49:50.542306 | orchestrator | Wednesday 04 February 2026 02:49:46 +0000 (0:00:00.370) 0:03:03.821 ****
2026-02-04 02:49:50.542324 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:49:50.542342 | orchestrator |
2026-02-04 02:49:50.542361 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-04 02:49:50.542379 | orchestrator | Wednesday 04 February 2026 02:49:49 +0000 (0:00:03.257) 0:03:07.079 ****
2026-02-04 02:49:50.542431 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:49:50.542452 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:49:50.542471 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:49:50.542490 | orchestrator |
2026-02-04 02:49:50.542503 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 02:49:50.542515 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-04 02:49:50.542528 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-04 02:49:50.542539 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-04 02:49:50.542550 | orchestrator |
2026-02-04 02:49:50.542560 | orchestrator |
2026-02-04 02:49:50.542572 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 02:49:50.542582 | orchestrator | Wednesday 04 February 2026 02:49:50 +0000 (0:00:00.484) 0:03:07.563 ****
2026-02-04 02:49:50.542593 | orchestrator | ===============================================================================
2026-02-04 02:49:50.542604 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 49.21s
2026-02-04 02:49:50.542615 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.80s
2026-02-04 02:49:50.542626 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.59s
2026-02-04 02:49:50.542636 | orchestrator | keystone : Restart keystone container ---------------------------------- 12.09s
2026-02-04 02:49:50.542647 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.80s
2026-02-04 02:49:50.542657 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.93s
2026-02-04 02:49:50.542668 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.49s
2026-02-04 02:49:50.542679 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.64s
2026-02-04 02:49:50.542703 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.47s
2026-02-04 02:49:50.542736 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.70s
2026-02-04 02:49:50.542747 | orchestrator | keystone : Creating default user role ----------------------------------- 3.26s
2026-02-04 02:49:50.542758 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.15s
2026-02-04 02:49:50.542769 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.03s
2026-02-04 02:49:50.542780 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.53s
2026-02-04 02:49:50.542790 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.24s
2026-02-04 02:49:50.542801 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.19s
2026-02-04 02:49:50.542812 | orchestrator | keystone : Run key distribution ----------------------------------------- 2.19s
2026-02-04 02:49:50.542822 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.18s
2026-02-04 02:49:50.542833 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.78s
2026-02-04 02:49:50.542844 | orchestrator | keystone : Ensuring config directories exist
---------------------------- 1.66s 2026-02-04 02:49:52.883483 | orchestrator | 2026-02-04 02:49:52 | INFO  | Task 5f3fc896-34bb-4bac-8661-46a09b1779d4 (placement) was prepared for execution. 2026-02-04 02:49:52.883583 | orchestrator | 2026-02-04 02:49:52 | INFO  | It takes a moment until task 5f3fc896-34bb-4bac-8661-46a09b1779d4 (placement) has been started and output is visible here. 2026-02-04 02:50:26.566768 | orchestrator | 2026-02-04 02:50:26.566887 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 02:50:26.566904 | orchestrator | 2026-02-04 02:50:26.566916 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 02:50:26.566928 | orchestrator | Wednesday 04 February 2026 02:49:57 +0000 (0:00:00.260) 0:00:00.260 **** 2026-02-04 02:50:26.566939 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:50:26.566957 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:50:26.566979 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:50:26.567007 | orchestrator | 2026-02-04 02:50:26.567025 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 02:50:26.567042 | orchestrator | Wednesday 04 February 2026 02:49:57 +0000 (0:00:00.302) 0:00:00.563 **** 2026-02-04 02:50:26.567061 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-04 02:50:26.567079 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-04 02:50:26.567094 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-04 02:50:26.567110 | orchestrator | 2026-02-04 02:50:26.567148 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-04 02:50:26.567166 | orchestrator | 2026-02-04 02:50:26.567184 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-04 02:50:26.567200 | orchestrator | 
Wednesday 04 February 2026 02:49:57 +0000 (0:00:00.435) 0:00:00.998 ****
2026-02-04 02:50:26.567219 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:50:26.567238 | orchestrator |
2026-02-04 02:50:26.567256 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-02-04 02:50:26.567276 | orchestrator | Wednesday 04 February 2026 02:49:58 +0000 (0:00:00.541) 0:00:01.540 ****
2026-02-04 02:50:26.567295 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-02-04 02:50:26.567314 | orchestrator |
2026-02-04 02:50:26.567334 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-02-04 02:50:26.567356 | orchestrator | Wednesday 04 February 2026 02:50:02 +0000 (0:00:03.716) 0:00:05.257 ****
2026-02-04 02:50:26.567409 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-02-04 02:50:26.567460 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-02-04 02:50:26.567481 | orchestrator |
2026-02-04 02:50:26.567500 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-02-04 02:50:26.567517 | orchestrator | Wednesday 04 February 2026 02:50:08 +0000 (0:00:06.085) 0:00:11.342 ****
2026-02-04 02:50:26.567536 | orchestrator | changed: [testbed-node-0] => (item=service)
2026-02-04 02:50:26.567603 | orchestrator |
2026-02-04 02:50:26.567639 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-02-04 02:50:26.567658 | orchestrator | Wednesday 04 February 2026 02:50:11 +0000 (0:00:03.441) 0:00:14.784 ****
2026-02-04 02:50:26.567677 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 02:50:26.567695 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-02-04 02:50:26.567712 | orchestrator |
2026-02-04 02:50:26.567730 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-02-04 02:50:26.567748 | orchestrator | Wednesday 04 February 2026 02:50:15 +0000 (0:00:03.986) 0:00:18.770 ****
2026-02-04 02:50:26.567765 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-04 02:50:26.567781 | orchestrator |
2026-02-04 02:50:26.567798 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-02-04 02:50:26.567814 | orchestrator | Wednesday 04 February 2026 02:50:18 +0000 (0:00:03.008) 0:00:21.778 ****
2026-02-04 02:50:26.567832 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-02-04 02:50:26.567850 | orchestrator |
2026-02-04 02:50:26.567867 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-02-04 02:50:26.567886 | orchestrator | Wednesday 04 February 2026 02:50:22 +0000 (0:00:00.297) 0:00:25.613 ****
2026-02-04 02:50:26.567904 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:50:26.567922 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:50:26.567941 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:50:26.567960 | orchestrator |
2026-02-04 02:50:26.567977 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-02-04 02:50:26.567995 | orchestrator | Wednesday 04 February 2026 02:50:22 +0000 (0:00:00.298) 0:00:25.911 ****
2026-02-04 02:50:26.568018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:50:26.568086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:50:26.568126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:50:26.568144 | orchestrator | 2026-02-04 02:50:26.568161 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-04 02:50:26.568177 | orchestrator | Wednesday 04 February 2026 02:50:23 +0000 (0:00:01.094) 0:00:27.006 **** 2026-02-04 02:50:26.568194 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:50:26.568211 | orchestrator | 2026-02-04 02:50:26.568229 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-04 02:50:26.568248 | orchestrator | Wednesday 04 February 2026 02:50:24 +0000 (0:00:00.338) 0:00:27.345 **** 2026-02-04 02:50:26.568265 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:50:26.568282 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:50:26.568302 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:50:26.568322 | orchestrator | 2026-02-04 02:50:26.568340 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-04 02:50:26.568360 | orchestrator | Wednesday 04 February 2026 02:50:24 +0000 (0:00:00.298) 0:00:27.643 **** 2026-02-04 02:50:26.568409 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:50:26.568429 | orchestrator | 2026-02-04 02:50:26.568447 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-04 02:50:26.568465 | orchestrator | Wednesday 04 February 
2026 02:50:24 +0000 (0:00:00.573) 0:00:28.217 **** 2026-02-04 02:50:26.568484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:50:26.568522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:50:29.403409 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:50:29.403514 | orchestrator | 2026-02-04 02:50:29.403532 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-04 02:50:29.403545 | orchestrator | Wednesday 04 February 2026 02:50:26 +0000 (0:00:01.565) 0:00:29.782 **** 2026-02-04 02:50:29.403558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 02:50:29.403571 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:50:29.403583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 02:50:29.403595 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:50:29.403606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 02:50:29.403643 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:50:29.403655 | orchestrator | 2026-02-04 02:50:29.403666 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-04 02:50:29.403695 | orchestrator | Wednesday 04 February 2026 02:50:27 +0000 (0:00:00.505) 0:00:30.288 **** 2026-02-04 02:50:29.403714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 02:50:29.403726 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:50:29.403738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 02:50:29.403749 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:50:29.403761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 02:50:29.403772 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:50:29.403782 | orchestrator | 2026-02-04 02:50:29.403793 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-04 02:50:29.403804 | orchestrator | Wednesday 04 February 2026 02:50:27 +0000 (0:00:00.711) 0:00:30.999 **** 2026-02-04 02:50:29.403815 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:50:29.403849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:50:36.410674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:50:36.410807 | orchestrator | 2026-02-04 02:50:36.410828 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-04 02:50:36.410842 | orchestrator | Wednesday 04 February 2026 02:50:29 +0000 (0:00:01.624) 0:00:32.624 **** 2026-02-04 02:50:36.410934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-04 02:50:36.410950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:50:36.411003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:50:36.411016 | orchestrator | 2026-02-04 02:50:36.411028 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] ***************
2026-02-04 02:50:36.411040 | orchestrator | Wednesday 04 February 2026 02:50:31 +0000 (0:00:02.391) 0:00:35.015 ****
2026-02-04 02:50:36.411070 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-04 02:50:36.411083 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-04 02:50:36.411095 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2026-02-04 02:50:36.411106 | orchestrator |
2026-02-04 02:50:36.411117 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2026-02-04 02:50:36.411128 | orchestrator | Wednesday 04 February 2026 02:50:33 +0000 (0:00:01.442) 0:00:36.458 ****
2026-02-04 02:50:36.411139 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:50:36.411151 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:50:36.411162 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:50:36.411173 | orchestrator |
2026-02-04 02:50:36.411184 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2026-02-04 02:50:36.411195 | orchestrator | Wednesday 04 February 2026 02:50:34 +0000 (0:00:00.733) 0:00:37.853 ****
2026-02-04 02:50:36.411210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 02:50:36.411224 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:50:36.411238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 02:50:36.411259 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:50:36.411273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-04 02:50:36.411287 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:50:36.411300 | orchestrator | 2026-02-04 02:50:36.411314 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-04 02:50:36.411330 | orchestrator | Wednesday 04 February 2026 02:50:35 +0000 (0:00:00.733) 0:00:38.587 **** 2026-02-04 02:50:36.411352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:51:03.776100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:51:03.776259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-04 02:51:03.776277 | orchestrator | 2026-02-04 02:51:03.776291 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-04 02:51:03.776304 | orchestrator | Wednesday 04 February 2026 02:50:36 +0000 (0:00:01.048) 0:00:39.636 **** 2026-02-04 02:51:03.776315 | orchestrator | changed: [testbed-node-0] 2026-02-04 
02:51:03.776327 | orchestrator | 2026-02-04 02:51:03.776338 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-04 02:51:03.776406 | orchestrator | Wednesday 04 February 2026 02:50:38 +0000 (0:00:02.019) 0:00:41.655 **** 2026-02-04 02:51:03.776418 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:51:03.776429 | orchestrator | 2026-02-04 02:51:03.776441 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-04 02:51:03.776452 | orchestrator | Wednesday 04 February 2026 02:50:40 +0000 (0:00:02.130) 0:00:43.785 **** 2026-02-04 02:51:03.776462 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:51:03.776473 | orchestrator | 2026-02-04 02:51:03.776484 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-04 02:51:03.776495 | orchestrator | Wednesday 04 February 2026 02:50:53 +0000 (0:00:12.777) 0:00:56.563 **** 2026-02-04 02:51:03.776506 | orchestrator | 2026-02-04 02:51:03.776516 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-04 02:51:03.776527 | orchestrator | Wednesday 04 February 2026 02:50:53 +0000 (0:00:00.068) 0:00:56.631 **** 2026-02-04 02:51:03.776537 | orchestrator | 2026-02-04 02:51:03.776548 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-04 02:51:03.776559 | orchestrator | Wednesday 04 February 2026 02:50:53 +0000 (0:00:00.067) 0:00:56.698 **** 2026-02-04 02:51:03.776569 | orchestrator | 2026-02-04 02:51:03.776580 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-04 02:51:03.776591 | orchestrator | Wednesday 04 February 2026 02:50:53 +0000 (0:00:00.075) 0:00:56.774 **** 2026-02-04 02:51:03.776601 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:51:03.776627 | orchestrator | changed: [testbed-node-2] 2026-02-04 
02:51:03.776638 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:51:03.776649 | orchestrator | 2026-02-04 02:51:03.776660 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 02:51:03.776672 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 02:51:03.776684 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 02:51:03.776695 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 02:51:03.776706 | orchestrator | 2026-02-04 02:51:03.776717 | orchestrator | 2026-02-04 02:51:03.776728 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 02:51:03.776739 | orchestrator | Wednesday 04 February 2026 02:51:03 +0000 (0:00:09.832) 0:01:06.606 **** 2026-02-04 02:51:03.776758 | orchestrator | =============================================================================== 2026-02-04 02:51:03.776769 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.78s 2026-02-04 02:51:03.776799 | orchestrator | placement : Restart placement-api container ----------------------------- 9.83s 2026-02-04 02:51:03.776810 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.09s 2026-02-04 02:51:03.776822 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.99s 2026-02-04 02:51:03.776833 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.83s 2026-02-04 02:51:03.776844 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.72s 2026-02-04 02:51:03.776854 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.44s 2026-02-04 02:51:03.776865 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.01s 2026-02-04 02:51:03.776876 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.39s 2026-02-04 02:51:03.776886 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.13s 2026-02-04 02:51:03.776897 | orchestrator | placement : Creating placement databases -------------------------------- 2.02s 2026-02-04 02:51:03.776908 | orchestrator | placement : Copying over config.json files for services ----------------- 1.62s 2026-02-04 02:51:03.776918 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.57s 2026-02-04 02:51:03.776929 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.44s 2026-02-04 02:51:03.776940 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.40s 2026-02-04 02:51:03.776951 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.09s 2026-02-04 02:51:03.776961 | orchestrator | placement : Check placement containers ---------------------------------- 1.05s 2026-02-04 02:51:03.776972 | orchestrator | placement : Copying over existing policy file --------------------------- 0.73s 2026-02-04 02:51:03.776983 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.71s 2026-02-04 02:51:03.776993 | orchestrator | placement : include_tasks ----------------------------------------------- 0.57s 2026-02-04 02:51:06.138276 | orchestrator | 2026-02-04 02:51:06 | INFO  | Task 1f7147d4-d93a-4915-8500-66d3b99d6540 (neutron) was prepared for execution. 2026-02-04 02:51:06.138421 | orchestrator | 2026-02-04 02:51:06 | INFO  | It takes a moment until task 1f7147d4-d93a-4915-8500-66d3b99d6540 (neutron) has been started and output is visible here. 
2026-02-04 02:51:53.460502 | orchestrator | 2026-02-04 02:51:53.460647 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 02:51:53.460700 | orchestrator | 2026-02-04 02:51:53.460713 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 02:51:53.460725 | orchestrator | Wednesday 04 February 2026 02:51:10 +0000 (0:00:00.284) 0:00:00.284 **** 2026-02-04 02:51:53.460737 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:51:53.460749 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:51:53.460760 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:51:53.460771 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:51:53.460782 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:51:53.460793 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:51:53.460803 | orchestrator | 2026-02-04 02:51:53.460814 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 02:51:53.460825 | orchestrator | Wednesday 04 February 2026 02:51:11 +0000 (0:00:00.696) 0:00:00.980 **** 2026-02-04 02:51:53.460836 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-04 02:51:53.460848 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-04 02:51:53.460859 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-04 02:51:53.460870 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-04 02:51:53.460880 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-04 02:51:53.460917 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-04 02:51:53.460928 | orchestrator | 2026-02-04 02:51:53.460939 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-04 02:51:53.460950 | orchestrator | 2026-02-04 02:51:53.460963 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-04 02:51:53.460977 | orchestrator | Wednesday 04 February 2026 02:51:11 +0000 (0:00:00.615) 0:00:01.596 **** 2026-02-04 02:51:53.461006 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:51:53.461021 | orchestrator | 2026-02-04 02:51:53.461034 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-04 02:51:53.461048 | orchestrator | Wednesday 04 February 2026 02:51:12 +0000 (0:00:01.256) 0:00:02.852 **** 2026-02-04 02:51:53.461060 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:51:53.461074 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:51:53.461087 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:51:53.461098 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:51:53.461110 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:51:53.461121 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:51:53.461131 | orchestrator | 2026-02-04 02:51:53.461142 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-04 02:51:53.461153 | orchestrator | Wednesday 04 February 2026 02:51:14 +0000 (0:00:01.290) 0:00:04.143 **** 2026-02-04 02:51:53.461164 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:51:53.461175 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:51:53.461185 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:51:53.461196 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:51:53.461207 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:51:53.461217 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:51:53.461228 | orchestrator | 2026-02-04 02:51:53.461242 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-04 02:51:53.461261 | orchestrator | Wednesday 04 February 2026 02:51:15 +0000 (0:00:01.070) 0:00:05.213 **** 
2026-02-04 02:51:53.461280 | orchestrator | ok: [testbed-node-0] => { 2026-02-04 02:51:53.461301 | orchestrator |  "changed": false, 2026-02-04 02:51:53.461318 | orchestrator |  "msg": "All assertions passed" 2026-02-04 02:51:53.461368 | orchestrator | } 2026-02-04 02:51:53.461389 | orchestrator | ok: [testbed-node-1] => { 2026-02-04 02:51:53.461409 | orchestrator |  "changed": false, 2026-02-04 02:51:53.461427 | orchestrator |  "msg": "All assertions passed" 2026-02-04 02:51:53.461444 | orchestrator | } 2026-02-04 02:51:53.461461 | orchestrator | ok: [testbed-node-2] => { 2026-02-04 02:51:53.461479 | orchestrator |  "changed": false, 2026-02-04 02:51:53.461497 | orchestrator |  "msg": "All assertions passed" 2026-02-04 02:51:53.461516 | orchestrator | } 2026-02-04 02:51:53.461534 | orchestrator | ok: [testbed-node-3] => { 2026-02-04 02:51:53.461552 | orchestrator |  "changed": false, 2026-02-04 02:51:53.461571 | orchestrator |  "msg": "All assertions passed" 2026-02-04 02:51:53.461589 | orchestrator | } 2026-02-04 02:51:53.461605 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 02:51:53.461622 | orchestrator |  "changed": false, 2026-02-04 02:51:53.461639 | orchestrator |  "msg": "All assertions passed" 2026-02-04 02:51:53.461656 | orchestrator | } 2026-02-04 02:51:53.461672 | orchestrator | ok: [testbed-node-5] => { 2026-02-04 02:51:53.461690 | orchestrator |  "changed": false, 2026-02-04 02:51:53.461709 | orchestrator |  "msg": "All assertions passed" 2026-02-04 02:51:53.461727 | orchestrator | } 2026-02-04 02:51:53.461745 | orchestrator | 2026-02-04 02:51:53.461764 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-04 02:51:53.461784 | orchestrator | Wednesday 04 February 2026 02:51:16 +0000 (0:00:00.866) 0:00:06.079 **** 2026-02-04 02:51:53.461802 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:51:53.461817 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:51:53.461827 | orchestrator 
| skipping: [testbed-node-2] 2026-02-04 02:51:53.461851 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:51:53.461862 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:51:53.461873 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:51:53.461884 | orchestrator | 2026-02-04 02:51:53.461895 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-04 02:51:53.461906 | orchestrator | Wednesday 04 February 2026 02:51:16 +0000 (0:00:00.638) 0:00:06.717 **** 2026-02-04 02:51:53.461917 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-04 02:51:53.461927 | orchestrator | 2026-02-04 02:51:53.461938 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-04 02:51:53.461949 | orchestrator | Wednesday 04 February 2026 02:51:20 +0000 (0:00:03.722) 0:00:10.440 **** 2026-02-04 02:51:53.461960 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-04 02:51:53.461977 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-04 02:51:53.461997 | orchestrator | 2026-02-04 02:51:53.462113 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-04 02:51:53.462135 | orchestrator | Wednesday 04 February 2026 02:51:26 +0000 (0:00:06.382) 0:00:16.823 **** 2026-02-04 02:51:53.462147 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 02:51:53.462158 | orchestrator | 2026-02-04 02:51:53.462169 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-04 02:51:53.462180 | orchestrator | Wednesday 04 February 2026 02:51:29 +0000 (0:00:03.119) 0:00:19.943 **** 2026-02-04 02:51:53.462191 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 02:51:53.462202 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-02-04 02:51:53.462213 | orchestrator | 2026-02-04 02:51:53.462224 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-04 02:51:53.462234 | orchestrator | Wednesday 04 February 2026 02:51:33 +0000 (0:00:03.989) 0:00:23.932 **** 2026-02-04 02:51:53.462245 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 02:51:53.462256 | orchestrator | 2026-02-04 02:51:53.462267 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-04 02:51:53.462277 | orchestrator | Wednesday 04 February 2026 02:51:37 +0000 (0:00:03.096) 0:00:27.028 **** 2026-02-04 02:51:53.462288 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-04 02:51:53.462299 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-04 02:51:53.462310 | orchestrator | 2026-02-04 02:51:53.462320 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-04 02:51:53.462331 | orchestrator | Wednesday 04 February 2026 02:51:44 +0000 (0:00:07.453) 0:00:34.482 **** 2026-02-04 02:51:53.462376 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:51:53.462388 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:51:53.462399 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:51:53.462424 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:51:53.462435 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:51:53.462458 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:51:53.462477 | orchestrator | 2026-02-04 02:51:53.462498 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-04 02:51:53.462516 | orchestrator | Wednesday 04 February 2026 02:51:45 +0000 (0:00:00.852) 0:00:35.334 **** 2026-02-04 02:51:53.462536 | orchestrator | skipping: [testbed-node-1] 2026-02-04 
02:51:53.462556 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:51:53.462576 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:51:53.462597 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:51:53.462617 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:51:53.462637 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:51:53.462656 | orchestrator | 2026-02-04 02:51:53.462675 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-04 02:51:53.462695 | orchestrator | Wednesday 04 February 2026 02:51:47 +0000 (0:00:02.168) 0:00:37.503 **** 2026-02-04 02:51:53.462729 | orchestrator | ok: [testbed-node-0] 2026-02-04 02:51:53.462749 | orchestrator | ok: [testbed-node-1] 2026-02-04 02:51:53.462769 | orchestrator | ok: [testbed-node-2] 2026-02-04 02:51:53.462788 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:51:53.462799 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:51:53.462810 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:51:53.462821 | orchestrator | 2026-02-04 02:51:53.462832 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-04 02:51:53.462842 | orchestrator | Wednesday 04 February 2026 02:51:48 +0000 (0:00:01.184) 0:00:38.687 **** 2026-02-04 02:51:53.462853 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:51:53.462864 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:51:53.462875 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:51:53.462885 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:51:53.462896 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:51:53.462914 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:51:53.462932 | orchestrator | 2026-02-04 02:51:53.462951 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-04 02:51:53.462968 | orchestrator | Wednesday 04 February 2026 02:51:50 +0000 
(0:00:02.175) 0:00:40.863 **** 2026-02-04 02:51:53.462992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:51:53.463036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:51:58.911330 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:51:58.911615 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 02:51:58.911647 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 02:51:58.911669 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 02:51:58.911690 | orchestrator | 2026-02-04 02:51:58.911713 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-04 02:51:58.911735 | orchestrator | Wednesday 04 February 2026 02:51:53 +0000 (0:00:02.536) 0:00:43.399 **** 2026-02-04 02:51:58.911755 | orchestrator | [WARNING]: Skipped 2026-02-04 02:51:58.911776 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-04 02:51:58.911796 | orchestrator | due to this access issue: 2026-02-04 02:51:58.911818 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-04 02:51:58.911837 | orchestrator | a directory 2026-02-04 02:51:58.911857 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 02:51:58.911877 | orchestrator | 2026-02-04 02:51:58.911897 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-04 02:51:58.911916 | orchestrator | Wednesday 04 February 2026 02:51:54 +0000 (0:00:00.840) 0:00:44.240 **** 2026-02-04 02:51:58.911937 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:51:58.911959 | orchestrator | 2026-02-04 02:51:58.911979 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-04 02:51:58.912022 | orchestrator | Wednesday 04 February 2026 02:51:55 +0000 (0:00:01.304) 0:00:45.545 **** 2026-02-04 02:51:58.912054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:51:58.912091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:51:58.912113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:51:58.912136 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 02:51:58.912171 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 02:52:03.776086 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 02:52:03.776208 | orchestrator | 2026-02-04 02:52:03.776223 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-04 02:52:03.776234 | orchestrator | Wednesday 04 February 2026 02:51:58 +0000 (0:00:03.301) 0:00:48.847 **** 2026-02-04 02:52:03.776248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:03.776266 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:03.776276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:03.776400 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:03.776414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:03.776424 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:03.776470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:03.776489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:03.776499 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:03.776508 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:03.776517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:03.776526 | orchestrator | skipping: [testbed-node-5] 
2026-02-04 02:52:03.776535 | orchestrator | 2026-02-04 02:52:03.776544 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-04 02:52:03.776553 | orchestrator | Wednesday 04 February 2026 02:52:00 +0000 (0:00:01.948) 0:00:50.796 **** 2026-02-04 02:52:03.776562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:03.776571 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:03.776581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:03.776595 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:03.776615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:09.238625 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:09.238778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:09.238813 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:09.238836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:09.238870 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:09.238891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:09.238942 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:09.238962 | orchestrator | 2026-02-04 02:52:09.238982 
| orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-04 02:52:09.239003 | orchestrator | Wednesday 04 February 2026 02:52:03 +0000 (0:00:02.915) 0:00:53.711 **** 2026-02-04 02:52:09.239023 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:09.239043 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:09.239063 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:09.239082 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:09.239098 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:09.239109 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:09.239119 | orchestrator | 2026-02-04 02:52:09.239130 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-04 02:52:09.239142 | orchestrator | Wednesday 04 February 2026 02:52:06 +0000 (0:00:02.408) 0:00:56.120 **** 2026-02-04 02:52:09.239155 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:09.239168 | orchestrator | 2026-02-04 02:52:09.239181 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-04 02:52:09.239193 | orchestrator | Wednesday 04 February 2026 02:52:06 +0000 (0:00:00.146) 0:00:56.267 **** 2026-02-04 02:52:09.239206 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:09.239218 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:09.239231 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:09.239243 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:09.239256 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:09.239266 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:09.239277 | orchestrator | 2026-02-04 02:52:09.239288 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-04 02:52:09.239299 | orchestrator | Wednesday 04 February 2026 02:52:06 +0000 (0:00:00.624) 0:00:56.891 **** 
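The service items looped over in the tasks above all carry a string-valued `healthcheck` dict (`interval`, `retries`, `start_period`, `timeout`, plus a `CMD-SHELL` test such as `healthcheck_curl` or `healthcheck_port`). As a minimal sketch of what such a dict amounts to, the snippet below converts it into the numeric form the Docker API expects (durations in nanoseconds). The function name and structure are illustrative assumptions, not the actual kolla container-module internals:

```python
# Hedged sketch: translate a kolla-ansible-style service healthcheck dict
# (as logged in the items above) into Docker-API-style healthcheck fields.
# `to_docker_healthcheck` is a hypothetical helper, not kolla code.

def to_docker_healthcheck(hc: dict) -> dict:
    """Convert string-valued healthcheck settings into the types the
    Docker API expects (durations in nanoseconds, retries as int)."""
    sec = 1_000_000_000  # Docker expresses durations in nanoseconds
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696']
        "interval": int(hc["interval"]) * sec,
        "timeout": int(hc["timeout"]) * sec,
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]) * sec,
    }

# Two services shaped like the log items (values copied from the log).
services = {
    "neutron-server": {
        "enabled": True,
        "healthcheck": {"interval": "30", "retries": "3",
                        "start_period": "5", "timeout": "30",
                        "test": ["CMD-SHELL",
                                 "healthcheck_curl http://192.168.16.10:9696"]},
    },
    "neutron-ovn-metadata-agent": {
        "enabled": True,
        "healthcheck": {"interval": "30", "retries": "3",
                        "start_period": "5", "timeout": "30",
                        "test": ["CMD-SHELL",
                                 "healthcheck_port neutron-ovn-metadata-agent 6640"]},
    },
}

# Mirror the with_dict loop seen in the log: only enabled services are processed.
converted = {name: to_docker_healthcheck(svc["healthcheck"])
             for name, svc in services.items() if svc.get("enabled")}
```

The `skipping:`/`changed:` lines in the log reflect exactly this kind of per-item iteration: each host processes only the service entries it belongs to, and conditional tasks (TLS copy, policy overrides) skip when their guard is false.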
2026-02-04 02:52:09.239375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:09.239389 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:09.239401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:09.239412 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:09.239433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:09.239444 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:09.239456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:09.239467 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:09.239483 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:09.239495 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:09.239516 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:17.329407 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:17.329515 | orchestrator | 2026-02-04 02:52:17.329532 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-04 02:52:17.329546 | orchestrator | Wednesday 04 February 2026 02:52:09 +0000 (0:00:02.284) 0:00:59.176 **** 2026-02-04 02:52:17.329559 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:52:17.329598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:52:17.329611 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 02:52:17.329640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:52:17.329672 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 02:52:17.329692 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 02:52:17.329704 | orchestrator | 2026-02-04 02:52:17.329716 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-04 02:52:17.329727 | orchestrator | Wednesday 04 February 2026 02:52:12 +0000 (0:00:02.955) 0:01:02.131 **** 2026-02-04 02:52:17.329769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:52:17.329783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:52:17.329801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:52:17.329821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 02:52:25.339094 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 02:52:25.339206 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-04 02:52:25.339220 | orchestrator | 2026-02-04 02:52:25.339229 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-02-04 02:52:25.339238 | orchestrator | Wednesday 04 February 2026 02:52:17 +0000 (0:00:05.136) 0:01:07.267 **** 2026-02-04 02:52:25.339253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:25.339276 | orchestrator | skipping: 
[testbed-node-1] 2026-02-04 02:52:25.339285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:25.339312 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:25.339356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 
02:52:25.339364 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:25.339371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:25.339378 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:25.339386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:25.339394 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:25.339406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:25.339414 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:25.339421 | orchestrator | 2026-02-04 02:52:25.339427 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-02-04 02:52:25.339441 | orchestrator | Wednesday 04 February 2026 02:52:19 +0000 (0:00:02.147) 0:01:09.415 **** 2026-02-04 02:52:25.339448 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:25.339456 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:25.339463 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:25.339471 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:52:25.339478 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:52:25.339485 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:52:25.339491 | orchestrator | 2026-02-04 02:52:25.339499 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-02-04 02:52:25.339506 | orchestrator | Wednesday 04 February 2026 02:52:22 +0000 (0:00:02.727) 0:01:12.142 **** 2026-02-04 02:52:25.339522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:42.940953 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:42.941070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:42.941086 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:42.941096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:42.941105 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:42.941114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:52:42.941156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:52:42.941181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-04 02:52:42.941190 | orchestrator | 2026-02-04 02:52:42.941199 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-02-04 02:52:42.941208 | orchestrator | Wednesday 04 February 2026 02:52:25 +0000 (0:00:03.143) 0:01:15.285 **** 2026-02-04 02:52:42.941216 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:42.941224 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:42.941231 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:42.941239 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:42.941246 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:42.941254 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:42.941261 | orchestrator | 2026-02-04 02:52:42.941269 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] 
**************************** 2026-02-04 02:52:42.941277 | orchestrator | Wednesday 04 February 2026 02:52:27 +0000 (0:00:02.314) 0:01:17.599 **** 2026-02-04 02:52:42.941285 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:42.941292 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:42.941300 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:42.941307 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:42.941315 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:42.941400 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:42.941411 | orchestrator | 2026-02-04 02:52:42.941418 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-02-04 02:52:42.941426 | orchestrator | Wednesday 04 February 2026 02:52:29 +0000 (0:00:02.006) 0:01:19.606 **** 2026-02-04 02:52:42.941433 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:42.941441 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:42.941448 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:42.941455 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:42.941462 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:42.941470 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:42.941477 | orchestrator | 2026-02-04 02:52:42.941484 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-02-04 02:52:42.941498 | orchestrator | Wednesday 04 February 2026 02:52:31 +0000 (0:00:02.120) 0:01:21.726 **** 2026-02-04 02:52:42.941519 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:42.941527 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:42.941544 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:42.941553 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:42.941561 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:42.941569 | orchestrator | skipping: [testbed-node-5] 
2026-02-04 02:52:42.941577 | orchestrator | 2026-02-04 02:52:42.941586 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-02-04 02:52:42.941594 | orchestrator | Wednesday 04 February 2026 02:52:34 +0000 (0:00:02.294) 0:01:24.021 **** 2026-02-04 02:52:42.941603 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:42.941611 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:42.941619 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:42.941627 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:42.941635 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:42.941644 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:42.941652 | orchestrator | 2026-02-04 02:52:42.941660 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-02-04 02:52:42.941669 | orchestrator | Wednesday 04 February 2026 02:52:36 +0000 (0:00:02.129) 0:01:26.150 **** 2026-02-04 02:52:42.941677 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:42.941689 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:42.941700 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:42.941709 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:42.941722 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:42.941732 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:42.941740 | orchestrator | 2026-02-04 02:52:42.941748 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-02-04 02:52:42.941757 | orchestrator | Wednesday 04 February 2026 02:52:38 +0000 (0:00:02.171) 0:01:28.322 **** 2026-02-04 02:52:42.941765 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-04 02:52:42.941774 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:42.941783 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-04 02:52:42.941792 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:42.941800 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-04 02:52:42.941809 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:42.941817 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-04 02:52:42.941826 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:42.941838 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-04 02:52:42.941849 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:52:42.941857 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-02-04 02:52:42.941866 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:42.941874 | orchestrator | 2026-02-04 02:52:42.941881 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-02-04 02:52:42.941889 | orchestrator | Wednesday 04 February 2026 02:52:40 +0000 (0:00:02.281) 0:01:30.604 **** 2026-02-04 02:52:42.941905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:45.062268 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:45.062400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:45.062419 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:52:45.062432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:45.062445 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:45.062474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:45.062486 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:45.062498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:45.062531 | orchestrator | 
skipping: [testbed-node-4] 2026-02-04 02:52:45.062561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:45.062574 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:45.062585 | orchestrator | 2026-02-04 02:52:45.062597 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-02-04 02:52:45.062610 | orchestrator | Wednesday 04 February 2026 02:52:42 +0000 (0:00:02.276) 0:01:32.880 **** 2026-02-04 02:52:45.062621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:45.062633 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:52:45.062649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:45.062661 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:52:45.062673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 
'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-04 02:52:45.062691 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:52:45.062702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:52:45.062714 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:52:45.062732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:53:10.984596 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:53:10.984707 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 02:53:10.984740 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:53:10.984748 | orchestrator | 2026-02-04 02:53:10.984756 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-02-04 02:53:10.984763 | orchestrator | Wednesday 04 February 2026 02:52:45 +0000 (0:00:02.115) 0:01:34.995 **** 2026-02-04 02:53:10.984769 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:53:10.984773 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:53:10.984777 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:53:10.984781 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:53:10.984786 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:53:10.984790 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:53:10.984794 | orchestrator | 2026-02-04 02:53:10.984809 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-02-04 02:53:10.984813 | orchestrator | Wednesday 04 February 2026 02:52:47 +0000 (0:00:02.093) 0:01:37.089 **** 2026-02-04 02:53:10.984817 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:53:10.984821 | orchestrator | skipping: [testbed-node-2] 2026-02-04 
02:53:10.984825 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:53:10.984829 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:53:10.984833 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:53:10.984836 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:53:10.984840 | orchestrator | 2026-02-04 02:53:10.984844 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-02-04 02:53:10.984865 | orchestrator | Wednesday 04 February 2026 02:52:50 +0000 (0:00:03.802) 0:01:40.891 **** 2026-02-04 02:53:10.984869 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:53:10.984873 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:53:10.984876 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:53:10.984880 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:53:10.984884 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:53:10.984888 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:53:10.984891 | orchestrator | 2026-02-04 02:53:10.984895 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-02-04 02:53:10.984899 | orchestrator | Wednesday 04 February 2026 02:52:53 +0000 (0:00:02.147) 0:01:43.039 **** 2026-02-04 02:53:10.984903 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:53:10.984906 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:53:10.984910 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:53:10.984914 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:53:10.984918 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:53:10.984921 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:53:10.984925 | orchestrator | 2026-02-04 02:53:10.984929 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-02-04 02:53:10.984933 | orchestrator | Wednesday 04 February 2026 02:52:55 +0000 (0:00:02.185) 0:01:45.224 **** 2026-02-04 
02:53:10.984936 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:53:10.984940 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:53:10.984944 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:53:10.984947 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:53:10.984951 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:53:10.984955 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:53:10.984958 | orchestrator | 2026-02-04 02:53:10.984962 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-02-04 02:53:10.984966 | orchestrator | Wednesday 04 February 2026 02:52:57 +0000 (0:00:02.108) 0:01:47.333 **** 2026-02-04 02:53:10.984970 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:53:10.984973 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:53:10.984977 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:53:10.984981 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:53:10.984985 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:53:10.984988 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:53:10.984992 | orchestrator | 2026-02-04 02:53:10.984996 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-02-04 02:53:10.985000 | orchestrator | Wednesday 04 February 2026 02:52:59 +0000 (0:00:02.269) 0:01:49.603 **** 2026-02-04 02:53:10.985003 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:53:10.985007 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:53:10.985011 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:53:10.985015 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:53:10.985018 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:53:10.985022 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:53:10.985026 | orchestrator | 2026-02-04 02:53:10.985030 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] 
************************** 2026-02-04 02:53:10.985033 | orchestrator | Wednesday 04 February 2026 02:53:01 +0000 (0:00:02.195) 0:01:51.798 **** 2026-02-04 02:53:10.985037 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:53:10.985041 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:53:10.985045 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:53:10.985048 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:53:10.985052 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:53:10.985056 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:53:10.985059 | orchestrator | 2026-02-04 02:53:10.985063 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-02-04 02:53:10.985079 | orchestrator | Wednesday 04 February 2026 02:53:04 +0000 (0:00:02.222) 0:01:54.020 **** 2026-02-04 02:53:10.985085 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:53:10.985096 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:53:10.985102 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:53:10.985108 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:53:10.985113 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:53:10.985119 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:53:10.985125 | orchestrator | 2026-02-04 02:53:10.985131 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-02-04 02:53:10.985137 | orchestrator | Wednesday 04 February 2026 02:53:06 +0000 (0:00:02.241) 0:01:56.262 **** 2026-02-04 02:53:10.985143 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-04 02:53:10.985151 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:53:10.985158 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-02-04 02:53:10.985164 | orchestrator | skipping: [testbed-node-0] 
2026-02-04 02:53:10.985171 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 02:53:10.985177 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:53:10.985181 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 02:53:10.985186 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:53:10.985193 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 02:53:10.985200 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:53:10.985206 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-04 02:53:10.985216 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:53:10.985223 | orchestrator |
2026-02-04 02:53:10.985230 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-02-04 02:53:10.985236 | orchestrator | Wednesday 04 February 2026 02:53:08 +0000 (0:00:02.336) 0:01:58.599 ****
2026-02-04 02:53:10.985244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 02:53:10.985252 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:53:10.985259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 02:53:10.985266 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:53:10.985284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 02:53:16.856280 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:53:16.856431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 02:53:16.856452 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:53:16.856481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 02:53:16.856494 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:53:16.856506 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 02:53:16.856518 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:53:16.856529 | orchestrator |
2026-02-04 02:53:16.856541 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-02-04 02:53:16.856554 | orchestrator | Wednesday 04 February 2026 02:53:10 +0000 (0:00:02.318) 0:02:00.917 ****
2026-02-04 02:53:16.856566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 02:53:16.856621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 02:53:16.856640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-04 02:53:16.856653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 02:53:16.856665 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 02:53:16.856686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 02:53:16.856698 | orchestrator |
2026-02-04 02:53:16.856710 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-04 02:53:16.856721 | orchestrator | Wednesday 04 February 2026 02:53:13 +0000 (0:00:02.778) 0:02:03.695 ****
2026-02-04 02:53:16.856732 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:53:16.856743 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:53:16.856754 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:53:16.856765 | orchestrator | skipping: [testbed-node-3]
2026-02-04 02:53:16.856776 | orchestrator | skipping: [testbed-node-4]
2026-02-04 02:53:16.856787 | orchestrator | skipping: [testbed-node-5]
2026-02-04 02:53:16.856797 | orchestrator |
2026-02-04 02:53:16.856809 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-02-04 02:53:16.856820 | orchestrator | Wednesday 04 February 2026 02:53:14 +0000 (0:00:00.636) 0:02:04.332 ****
2026-02-04 02:53:16.856837 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:55:16.773878 | orchestrator |
2026-02-04 02:55:16.774084 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-02-04 02:55:16.774109 | orchestrator | Wednesday 04 February 2026 02:53:16 +0000 (0:00:02.467) 0:02:06.799 ****
2026-02-04 02:55:16.774121 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:55:16.774134 | orchestrator |
2026-02-04 02:55:16.774153 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-02-04 02:55:16.774165 | orchestrator | Wednesday 04 February 2026 02:53:18 +0000 (0:00:02.143) 0:02:08.943 ****
2026-02-04 02:55:16.774177 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:55:16.774189 | orchestrator |
2026-02-04 02:55:16.774200 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 02:55:16.774212 | orchestrator | Wednesday 04 February 2026 02:53:58 +0000 (0:00:39.443) 0:02:48.387 ****
2026-02-04 02:55:16.774223 | orchestrator |
2026-02-04 02:55:16.774234 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 02:55:16.774245 | orchestrator | Wednesday 04 February 2026 02:53:58 +0000 (0:00:00.073) 0:02:48.460 ****
2026-02-04 02:55:16.774256 | orchestrator |
2026-02-04 02:55:16.774268 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 02:55:16.774279 | orchestrator | Wednesday 04 February 2026 02:53:58 +0000 (0:00:00.070) 0:02:48.530 ****
2026-02-04 02:55:16.774290 | orchestrator |
2026-02-04 02:55:16.774301 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 02:55:16.774312 | orchestrator | Wednesday 04 February 2026 02:53:58 +0000 (0:00:00.070) 0:02:48.601 ****
2026-02-04 02:55:16.774347 | orchestrator |
2026-02-04 02:55:16.774376 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 02:55:16.774388 | orchestrator | Wednesday 04 February 2026 02:53:58 +0000 (0:00:00.071) 0:02:48.672 ****
2026-02-04 02:55:16.774399 | orchestrator |
2026-02-04 02:55:16.774410 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-04 02:55:16.774421 | orchestrator | Wednesday 04 February 2026 02:53:58 +0000 (0:00:00.072) 0:02:48.745 ****
2026-02-04 02:55:16.774431 | orchestrator |
2026-02-04 02:55:16.774443 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-02-04 02:55:16.774455 | orchestrator | Wednesday 04 February 2026 02:53:58 +0000 (0:00:00.071) 0:02:48.817 ****
2026-02-04 02:55:16.774491 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:55:16.774503 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:55:16.774514 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:55:16.774525 | orchestrator |
2026-02-04 02:55:16.774537 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-02-04 02:55:16.774548 | orchestrator | Wednesday 04 February 2026 02:54:20 +0000 (0:00:21.796) 0:03:10.613 ****
2026-02-04 02:55:16.774559 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:55:16.774570 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:55:16.774582 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:55:16.774593 | orchestrator |
2026-02-04 02:55:16.774604 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 02:55:16.774616 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-04 02:55:16.774630 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-04 02:55:16.774643 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-04 02:55:16.774654 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-04 02:55:16.774665 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-04 02:55:16.774676 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-04 02:55:16.774688 | orchestrator |
2026-02-04 02:55:16.774700 | orchestrator |
2026-02-04 02:55:16.774712 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 02:55:16.774723 | orchestrator | Wednesday 04 February 2026 02:55:16 +0000 (0:00:55.648) 0:04:06.262 ****
2026-02-04 02:55:16.774734 | orchestrator | ===============================================================================
2026-02-04 02:55:16.774744 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 55.65s
2026-02-04 02:55:16.774755 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.44s
2026-02-04 02:55:16.774766 | orchestrator | neutron : Restart neutron-server container ----------------------------- 21.80s
2026-02-04 02:55:16.774777 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.45s
2026-02-04 02:55:16.774788 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.38s
2026-02-04 02:55:16.774800 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.14s
2026-02-04 02:55:16.774810 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 3.99s
2026-02-04 02:55:16.774821 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.80s
2026-02-04 02:55:16.774832 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.72s
2026-02-04 02:55:16.774842 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.30s
2026-02-04 02:55:16.774875 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.14s
2026-02-04 02:55:16.774887 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.12s
2026-02-04 02:55:16.774897 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.10s
2026-02-04 02:55:16.774908 | orchestrator | neutron : Copying over config.json files for services ------------------- 2.96s
2026-02-04 02:55:16.774919 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.92s
2026-02-04 02:55:16.774931 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.78s
2026-02-04 02:55:16.774951 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.73s
2026-02-04 02:55:16.774962 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.54s
2026-02-04 02:55:16.774973 | orchestrator | neutron : Creating Neutron database ------------------------------------- 2.47s
2026-02-04 02:55:16.774984 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 2.41s
2026-02-04 02:55:19.155177 | orchestrator | 2026-02-04 02:55:19 | INFO  | Task 99d395ed-0c26-48ba-9cae-7e4b049cbc2b (nova) was prepared for execution.
2026-02-04 02:55:19.155275 | orchestrator | 2026-02-04 02:55:19 | INFO  | It takes a moment until task 99d395ed-0c26-48ba-9cae-7e4b049cbc2b (nova) has been started and output is visible here.
2026-02-04 02:57:13.206991 | orchestrator |
2026-02-04 02:57:13.207160 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 02:57:13.207191 | orchestrator |
2026-02-04 02:57:13.207212 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-04 02:57:13.207226 | orchestrator | Wednesday 04 February 2026 02:55:23 +0000 (0:00:00.285) 0:00:00.285 ****
2026-02-04 02:57:13.207237 | orchestrator | changed: [testbed-manager]
2026-02-04 02:57:13.207271 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:57:13.207282 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:57:13.207293 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:57:13.207342 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:57:13.207360 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:57:13.207378 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:57:13.207397 | orchestrator |
2026-02-04 02:57:13.207414 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 02:57:13.207433 | orchestrator | Wednesday 04 February 2026 02:55:24 +0000 (0:00:00.872) 0:00:01.158 ****
2026-02-04 02:57:13.207453 | orchestrator | changed: [testbed-manager]
2026-02-04 02:57:13.207471 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:57:13.207490 | orchestrator | changed: [testbed-node-1]
2026-02-04 02:57:13.207510 | orchestrator | changed: [testbed-node-2]
2026-02-04 02:57:13.207528 | orchestrator | changed: [testbed-node-3]
2026-02-04 02:57:13.207547 | orchestrator | changed: [testbed-node-4]
2026-02-04 02:57:13.207566 | orchestrator | changed: [testbed-node-5]
2026-02-04 02:57:13.207585 | orchestrator |
2026-02-04 02:57:13.207604 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 02:57:13.207626 | orchestrator | Wednesday 04 February 2026 02:55:25 +0000 (0:00:00.903) 0:00:02.061 ****
2026-02-04 02:57:13.207645 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-04 02:57:13.207665 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-04 02:57:13.207684 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-04 02:57:13.207705 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-04 02:57:13.207723 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-04 02:57:13.207744 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-04 02:57:13.207757 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-04 02:57:13.207770 | orchestrator |
2026-02-04 02:57:13.207783 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-04 02:57:13.207796 | orchestrator |
2026-02-04 02:57:13.207809 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-04 02:57:13.207822 | orchestrator | Wednesday 04 February 2026 02:55:26 +0000 (0:00:00.753) 0:00:02.815 ****
2026-02-04 02:57:13.207835 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:57:13.207847 | orchestrator |
2026-02-04 02:57:13.207860 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-04 02:57:13.207873 | orchestrator | Wednesday 04 February 2026 02:55:26 +0000 (0:00:00.804) 0:00:03.619 ****
2026-02-04 02:57:13.207886 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-04 02:57:13.207923 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-04 02:57:13.207934 | orchestrator |
2026-02-04 02:57:13.207946 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-04 02:57:13.207956 | orchestrator | Wednesday 04 February 2026 02:55:30 +0000 (0:00:03.881) 0:00:07.501 ****
2026-02-04 02:57:13.207967 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 02:57:13.207979 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 02:57:13.207989 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:57:13.208000 | orchestrator |
2026-02-04 02:57:13.208011 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-04 02:57:13.208022 | orchestrator | Wednesday 04 February 2026 02:55:34 +0000 (0:00:04.130) 0:00:11.631 ****
2026-02-04 02:57:13.208033 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:57:13.208044 | orchestrator |
2026-02-04 02:57:13.208055 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-04 02:57:13.208066 | orchestrator | Wednesday 04 February 2026 02:55:35 +0000 (0:00:00.620) 0:00:12.251 ****
2026-02-04 02:57:13.208077 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:57:13.208088 | orchestrator |
2026-02-04 02:57:13.208099 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-04 02:57:13.208109 | orchestrator | Wednesday 04 February 2026 02:55:36 +0000 (0:00:01.241) 0:00:13.493 ****
2026-02-04 02:57:13.208120 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:57:13.208131 | orchestrator |
2026-02-04 02:57:13.208142 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-04 02:57:13.208152 | orchestrator | Wednesday 04 February 2026 02:55:39 +0000 (0:00:02.592) 0:00:16.086 ****
2026-02-04 02:57:13.208163 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:57:13.208174 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:57:13.208185 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:57:13.208196 | orchestrator |
2026-02-04 02:57:13.208206 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-04 02:57:13.208217 | orchestrator | Wednesday 04 February 2026 02:55:39 +0000 (0:00:00.298) 0:00:16.384 ****
2026-02-04 02:57:13.208228 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:57:13.208239 | orchestrator |
2026-02-04 02:57:13.208250 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-04 02:57:13.208261 | orchestrator | Wednesday 04 February 2026 02:56:09 +0000 (0:00:29.939) 0:00:46.324 ****
2026-02-04 02:57:13.208272 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:57:13.208282 | orchestrator |
2026-02-04 02:57:13.208293 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-04 02:57:13.208348 | orchestrator | Wednesday 04 February 2026 02:56:23 +0000 (0:00:13.973) 0:01:00.297 ****
2026-02-04 02:57:13.208360 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:57:13.208370 | orchestrator |
2026-02-04 02:57:13.208381 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-04 02:57:13.208392 | orchestrator | Wednesday 04 February 2026 02:56:35 +0000 (0:00:12.141) 0:01:12.438 ****
2026-02-04 02:57:13.208423 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:57:13.208435 | orchestrator |
2026-02-04 02:57:13.208454 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-04 02:57:13.208465 | orchestrator | Wednesday 04 February 2026 02:56:36 +0000 (0:00:00.684) 0:01:13.123 ****
2026-02-04 02:57:13.208476 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:57:13.208487 | orchestrator |
2026-02-04 02:57:13.208498 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-04 02:57:13.208509 | orchestrator | Wednesday 04 February 2026 02:56:36 +0000 (0:00:00.474) 0:01:13.598 ****
2026-02-04 02:57:13.208520 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:57:13.208531 | orchestrator |
2026-02-04 02:57:13.208542 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-04 02:57:13.208562 | orchestrator | Wednesday 04 February 2026 02:56:37 +0000 (0:00:00.694) 0:01:14.292 ****
2026-02-04 02:57:13.208573 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:57:13.208584 | orchestrator |
2026-02-04 02:57:13.208595 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-04 02:57:13.208606 | orchestrator | Wednesday 04 February 2026 02:56:54 +0000 (0:00:17.083) 0:01:31.376 ****
2026-02-04 02:57:13.208616 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:57:13.208627 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:57:13.208638 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:57:13.208648 | orchestrator |
2026-02-04 02:57:13.208659 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-04 02:57:13.208670 | orchestrator |
2026-02-04 02:57:13.208681 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-04 02:57:13.208692 | orchestrator | Wednesday 04 February 2026 02:56:54 +0000 (0:00:00.330) 0:01:31.706 ****
2026-02-04 02:57:13.208702 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 02:57:13.208713 | orchestrator |
2026-02-04 02:57:13.208724 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-04 02:57:13.208735 | orchestrator | Wednesday 04 February 2026 02:56:55 +0000 (0:00:00.759) 0:01:32.466 ****
2026-02-04 02:57:13.208745 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:57:13.208756 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:57:13.208767 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:57:13.208777 | orchestrator |
2026-02-04 02:57:13.208788 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-04 02:57:13.208799 | orchestrator | Wednesday 04 February 2026 02:56:57 +0000 (0:00:02.010) 0:01:34.477 ****
2026-02-04 02:57:13.208809 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:57:13.208820 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:57:13.208830 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:57:13.208841 | orchestrator |
2026-02-04 02:57:13.208852 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-04 02:57:13.208863 | orchestrator | Wednesday 04 February 2026 02:56:59 +0000 (0:00:02.141) 0:01:36.618 ****
2026-02-04 02:57:13.208873 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:57:13.208884 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:57:13.208895 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:57:13.208905 | orchestrator |
2026-02-04 02:57:13.208916 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-04 02:57:13.208927 | orchestrator | Wednesday 04 February 2026 02:57:00 +0000 (0:00:00.523) 0:01:37.141 ****
2026-02-04 02:57:13.208937 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-04 02:57:13.208948 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:57:13.208959 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-04 02:57:13.208970 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:57:13.208981 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-04 02:57:13.208991 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-04 02:57:13.209002 | orchestrator |
2026-02-04 02:57:13.209016 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-04 02:57:13.209035 | orchestrator | Wednesday 04 February 2026 02:57:07 +0000 (0:00:07.406) 0:01:44.547 ****
2026-02-04 02:57:13.209052 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:57:13.209071 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:57:13.209090 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:57:13.209108 | orchestrator |
2026-02-04 02:57:13.209126 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-04 02:57:13.209137 | orchestrator | Wednesday 04 February 2026 02:57:08 +0000 (0:00:00.347) 0:01:44.895 ****
2026-02-04 02:57:13.209148 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-04 02:57:13.209159 | orchestrator | skipping: [testbed-node-0]
2026-02-04 02:57:13.209169 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-04 02:57:13.209188 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:57:13.209199 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-04 02:57:13.209210 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:57:13.209220 | orchestrator |
2026-02-04 02:57:13.209231 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-04 02:57:13.209242 | orchestrator | Wednesday 04 February 2026 02:57:09 +0000 (0:00:01.119) 0:01:46.014 ****
2026-02-04 02:57:13.209261 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:57:13.209278 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:57:13.209319 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:57:13.209338 | orchestrator |
2026-02-04 02:57:13.209356 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-04 02:57:13.209375 | orchestrator | Wednesday 04 February 2026 02:57:09 +0000 (0:00:00.508) 0:01:46.522 ****
2026-02-04 02:57:13.209394 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:57:13.209412 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:57:13.209424 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:57:13.209435 | orchestrator |
2026-02-04 02:57:13.209446 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-04 02:57:13.209456 | orchestrator | Wednesday 04 February 2026 02:57:10 +0000 (0:00:00.970) 0:01:47.493 ****
2026-02-04 02:57:13.209467 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:57:13.209478 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:57:13.209499 | orchestrator | changed: [testbed-node-0]
2026-02-04 02:58:27.911862 | orchestrator |
2026-02-04 02:58:27.912026 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-04 02:58:27.912082 | orchestrator | Wednesday 04 February 2026 02:57:13 +0000 (0:00:02.412) 0:01:49.905 ****
2026-02-04 02:58:27.912097 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:58:27.912109 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:58:27.912120 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:58:27.912132 | orchestrator |
2026-02-04 02:58:27.912144 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-04 02:58:27.912155 | orchestrator | Wednesday 04 February 2026 02:57:33 +0000 (0:00:19.857) 0:02:09.763 ****
2026-02-04 02:58:27.912166 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:58:27.912177 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:58:27.912188 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:58:27.912199 | orchestrator |
2026-02-04 02:58:27.912210 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-04 02:58:27.912221 | orchestrator | Wednesday 04 February 2026 02:57:44 +0000 (0:00:11.551) 0:02:21.314 ****
2026-02-04 02:58:27.912232 | orchestrator | ok: [testbed-node-0]
2026-02-04 02:58:27.912243 | orchestrator | skipping: [testbed-node-1]
2026-02-04 02:58:27.912254 | orchestrator | skipping: [testbed-node-2]
2026-02-04 02:58:27.912265 | orchestrator | 2026-02-04 02:58:27.912276 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-04 02:58:27.912286 | orchestrator | Wednesday 04 February 2026 02:57:45 +0000 (0:00:01.066) 0:02:22.381 **** 2026-02-04 02:58:27.912326 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:58:27.912339 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:58:27.912350 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:58:27.912361 | orchestrator | 2026-02-04 02:58:27.912372 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-04 02:58:27.912383 | orchestrator | Wednesday 04 February 2026 02:57:58 +0000 (0:00:12.561) 0:02:34.942 **** 2026-02-04 02:58:27.912396 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:58:27.912408 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:58:27.912421 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:58:27.912435 | orchestrator | 2026-02-04 02:58:27.912448 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-04 02:58:27.912461 | orchestrator | Wednesday 04 February 2026 02:57:59 +0000 (0:00:01.093) 0:02:36.036 **** 2026-02-04 02:58:27.912501 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:58:27.912515 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:58:27.912528 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:58:27.912539 | orchestrator | 2026-02-04 02:58:27.912552 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-04 02:58:27.912565 | orchestrator | 2026-02-04 02:58:27.912578 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-04 02:58:27.912591 | orchestrator | Wednesday 04 February 2026 02:57:59 +0000 (0:00:00.325) 0:02:36.362 **** 2026-02-04 02:58:27.912603 | orchestrator | 
included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:58:27.912617 | orchestrator | 2026-02-04 02:58:27.912630 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-04 02:58:27.912643 | orchestrator | Wednesday 04 February 2026 02:58:00 +0000 (0:00:00.751) 0:02:37.113 **** 2026-02-04 02:58:27.912655 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-04 02:58:27.912668 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-04 02:58:27.912680 | orchestrator | 2026-02-04 02:58:27.912693 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-04 02:58:27.912706 | orchestrator | Wednesday 04 February 2026 02:58:03 +0000 (0:00:03.147) 0:02:40.261 **** 2026-02-04 02:58:27.912719 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-04 02:58:27.912781 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-04 02:58:27.912794 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-04 02:58:27.912805 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-04 02:58:27.912817 | orchestrator | 2026-02-04 02:58:27.912828 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-04 02:58:27.912838 | orchestrator | Wednesday 04 February 2026 02:58:09 +0000 (0:00:06.076) 0:02:46.337 **** 2026-02-04 02:58:27.912849 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 02:58:27.912860 | orchestrator | 2026-02-04 02:58:27.912870 | orchestrator | TASK [service-ks-register : nova | Creating users] 
***************************** 2026-02-04 02:58:27.912884 | orchestrator | Wednesday 04 February 2026 02:58:12 +0000 (0:00:03.211) 0:02:49.549 **** 2026-02-04 02:58:27.912903 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 02:58:27.912921 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-04 02:58:27.912938 | orchestrator | 2026-02-04 02:58:27.912955 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-04 02:58:27.912973 | orchestrator | Wednesday 04 February 2026 02:58:16 +0000 (0:00:03.651) 0:02:53.201 **** 2026-02-04 02:58:27.912992 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 02:58:27.913011 | orchestrator | 2026-02-04 02:58:27.913029 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-04 02:58:27.913045 | orchestrator | Wednesday 04 February 2026 02:58:19 +0000 (0:00:03.050) 0:02:56.252 **** 2026-02-04 02:58:27.913056 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-04 02:58:27.913068 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-04 02:58:27.913078 | orchestrator | 2026-02-04 02:58:27.913089 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-04 02:58:27.913127 | orchestrator | Wednesday 04 February 2026 02:58:26 +0000 (0:00:07.097) 0:03:03.349 **** 2026-02-04 02:58:27.913144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:27.913176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:27.913189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:27.913243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-04 02:58:32.411891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:58:32.411990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:58:32.412010 | orchestrator | 2026-02-04 02:58:32.412027 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-04 02:58:32.412043 | orchestrator | Wednesday 04 February 2026 02:58:27 +0000 (0:00:01.260) 0:03:04.610 **** 2026-02-04 02:58:32.412057 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:58:32.412072 | orchestrator | 2026-02-04 02:58:32.412086 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-04 02:58:32.412101 | orchestrator | Wednesday 04 February 2026 02:58:28 +0000 (0:00:00.139) 0:03:04.750 **** 2026-02-04 02:58:32.412116 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:58:32.412127 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 02:58:32.412135 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:58:32.412143 | orchestrator | 2026-02-04 02:58:32.412151 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-04 02:58:32.412159 | orchestrator | Wednesday 04 February 2026 02:58:28 +0000 (0:00:00.308) 0:03:05.058 **** 2026-02-04 02:58:32.412167 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 02:58:32.412175 | orchestrator | 2026-02-04 02:58:32.412183 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-04 02:58:32.412191 | orchestrator | Wednesday 04 February 2026 02:58:29 +0000 (0:00:00.697) 0:03:05.756 **** 2026-02-04 02:58:32.412199 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:58:32.412207 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:58:32.412221 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:58:32.412234 | orchestrator | 2026-02-04 02:58:32.412247 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-04 02:58:32.412260 | orchestrator | Wednesday 04 February 2026 02:58:29 +0000 (0:00:00.521) 0:03:06.278 **** 2026-02-04 02:58:32.412274 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:58:32.412288 | orchestrator | 2026-02-04 02:58:32.412326 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-04 02:58:32.412339 | orchestrator | Wednesday 04 February 2026 02:58:30 +0000 (0:00:00.579) 0:03:06.858 **** 2026-02-04 02:58:32.412358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:32.412446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:32.412468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:32.412484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:58:32.412498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:58:32.412529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:58:32.412545 | orchestrator | 2026-02-04 02:58:32.412568 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-04 02:58:34.189580 | orchestrator | Wednesday 04 February 2026 02:58:32 +0000 (0:00:02.236) 0:03:09.094 **** 2026-02-04 02:58:34.189710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 02:58:34.189733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:58:34.189747 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:58:34.189761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 02:58:34.189797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:58:34.189829 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:58:34.189869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 02:58:34.189883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:58:34.189895 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:58:34.189906 | orchestrator | 2026-02-04 02:58:34.189917 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-04 02:58:34.189929 | orchestrator | Wednesday 04 February 2026 02:58:33 +0000 (0:00:00.945) 
0:03:10.040 **** 2026-02-04 02:58:34.189940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 02:58:34.189961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:58:34.189973 | orchestrator | skipping: 
[testbed-node-0] 2026-02-04 02:58:34.190006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 02:58:36.438244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:58:36.438391 | orchestrator | skipping: 
[testbed-node-1] 2026-02-04 02:58:36.438413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 02:58:36.438454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:58:36.438467 | orchestrator | skipping: 
[testbed-node-2] 2026-02-04 02:58:36.438479 | orchestrator | 2026-02-04 02:58:36.438492 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-04 02:58:36.438505 | orchestrator | Wednesday 04 February 2026 02:58:34 +0000 (0:00:00.851) 0:03:10.891 **** 2026-02-04 02:58:36.438530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:36.438563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:36.438577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:36.438598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:58:36.438616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:58:36.438636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:58:42.689996 | orchestrator | 2026-02-04 02:58:42.690159 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-04 02:58:42.690187 | orchestrator | Wednesday 04 February 2026 02:58:36 +0000 (0:00:02.248) 0:03:13.139 **** 2026-02-04 02:58:42.690205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:42.690245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:42.690275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:42.690410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:58:42.690490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:58:42.690514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:58:42.690527 | orchestrator | 2026-02-04 02:58:42.690542 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-04 02:58:42.690555 | orchestrator | Wednesday 04 February 2026 02:58:42 +0000 (0:00:05.642) 0:03:18.782 **** 2026-02-04 02:58:42.690576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 02:58:42.690592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:58:42.690606 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:58:42.690634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 02:58:46.968912 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:58:46.969026 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:58:46.969047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-04 02:58:46.969079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 02:58:46.969092 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:58:46.969104 | orchestrator | 2026-02-04 02:58:46.969117 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-04 02:58:46.969130 | orchestrator | Wednesday 04 February 2026 02:58:42 +0000 (0:00:00.613) 0:03:19.395 **** 2026-02-04 02:58:46.969142 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:58:46.969153 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:58:46.969165 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:58:46.969177 | orchestrator | 2026-02-04 02:58:46.969188 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-04 02:58:46.969200 | orchestrator | Wednesday 04 February 2026 02:58:44 +0000 (0:00:01.491) 0:03:20.887 **** 2026-02-04 02:58:46.969212 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:58:46.969223 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:58:46.969235 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:58:46.969246 | orchestrator | 2026-02-04 02:58:46.969258 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-04 02:58:46.969269 | orchestrator | Wednesday 04 February 2026 02:58:44 +0000 (0:00:00.341) 0:03:21.228 **** 2026-02-04 02:58:46.969300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:46.969378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:46.969399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-04 02:58:46.969412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:58:46.969431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:58:46.969454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 02:59:26.514403 | orchestrator | 2026-02-04 02:59:26.514518 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-04 02:59:26.514536 | orchestrator | Wednesday 04 February 2026 02:58:46 +0000 (0:00:01.947) 0:03:23.176 **** 2026-02-04 02:59:26.514548 | orchestrator | 2026-02-04 02:59:26.514559 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-04 02:59:26.514570 | orchestrator | Wednesday 04 February 2026 02:58:46 
+0000 (0:00:00.180) 0:03:23.357 **** 2026-02-04 02:59:26.514581 | orchestrator | 2026-02-04 02:59:26.514592 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-04 02:59:26.514604 | orchestrator | Wednesday 04 February 2026 02:58:46 +0000 (0:00:00.152) 0:03:23.509 **** 2026-02-04 02:59:26.514615 | orchestrator | 2026-02-04 02:59:26.514626 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-04 02:59:26.514637 | orchestrator | Wednesday 04 February 2026 02:58:46 +0000 (0:00:00.153) 0:03:23.663 **** 2026-02-04 02:59:26.514648 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:59:26.514660 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:59:26.514671 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:59:26.514682 | orchestrator | 2026-02-04 02:59:26.514693 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-04 02:59:26.514704 | orchestrator | Wednesday 04 February 2026 02:59:03 +0000 (0:00:16.587) 0:03:40.251 **** 2026-02-04 02:59:26.514715 | orchestrator | changed: [testbed-node-0] 2026-02-04 02:59:26.514726 | orchestrator | changed: [testbed-node-2] 2026-02-04 02:59:26.514737 | orchestrator | changed: [testbed-node-1] 2026-02-04 02:59:26.514748 | orchestrator | 2026-02-04 02:59:26.514759 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-04 02:59:26.514770 | orchestrator | 2026-02-04 02:59:26.514781 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-04 02:59:26.514792 | orchestrator | Wednesday 04 February 2026 02:59:13 +0000 (0:00:10.100) 0:03:50.351 **** 2026-02-04 02:59:26.514804 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:59:26.514816 | 
orchestrator | 2026-02-04 02:59:26.514827 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-04 02:59:26.514854 | orchestrator | Wednesday 04 February 2026 02:59:14 +0000 (0:00:01.190) 0:03:51.541 **** 2026-02-04 02:59:26.514866 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:59:26.514877 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:59:26.514889 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:59:26.514926 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:59:26.514939 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:59:26.514951 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:59:26.514964 | orchestrator | 2026-02-04 02:59:26.514977 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-04 02:59:26.514990 | orchestrator | Wednesday 04 February 2026 02:59:15 +0000 (0:00:00.652) 0:03:52.194 **** 2026-02-04 02:59:26.515003 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:59:26.515015 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:59:26.515028 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:59:26.515041 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:59:26.515054 | orchestrator | 2026-02-04 02:59:26.515067 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-04 02:59:26.515080 | orchestrator | Wednesday 04 February 2026 02:59:16 +0000 (0:00:01.093) 0:03:53.287 **** 2026-02-04 02:59:26.515093 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-04 02:59:26.515106 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-04 02:59:26.515119 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-02-04 02:59:26.515133 | orchestrator | 2026-02-04 02:59:26.515146 | orchestrator | TASK [module-load : Persist modules via modules-load.d] 
************************ 2026-02-04 02:59:26.515159 | orchestrator | Wednesday 04 February 2026 02:59:17 +0000 (0:00:00.715) 0:03:54.003 **** 2026-02-04 02:59:26.515171 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-04 02:59:26.515184 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-04 02:59:26.515196 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-04 02:59:26.515208 | orchestrator | 2026-02-04 02:59:26.515222 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-04 02:59:26.515234 | orchestrator | Wednesday 04 February 2026 02:59:18 +0000 (0:00:01.372) 0:03:55.376 **** 2026-02-04 02:59:26.515247 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-04 02:59:26.515259 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:59:26.515270 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-04 02:59:26.515281 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:59:26.515292 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-02-04 02:59:26.515303 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:59:26.515314 | orchestrator | 2026-02-04 02:59:26.515370 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-04 02:59:26.515383 | orchestrator | Wednesday 04 February 2026 02:59:19 +0000 (0:00:00.576) 0:03:55.952 **** 2026-02-04 02:59:26.515415 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-04 02:59:26.515427 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-04 02:59:26.515438 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-04 02:59:26.515448 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 02:59:26.515460 | orchestrator | 
skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 02:59:26.515470 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:59:26.515481 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 02:59:26.515511 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 02:59:26.515523 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:59:26.515534 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-04 02:59:26.515544 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-04 02:59:26.515555 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:59:26.515566 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-04 02:59:26.515588 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-04 02:59:26.515599 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-04 02:59:26.515610 | orchestrator | 2026-02-04 02:59:26.515621 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-04 02:59:26.515632 | orchestrator | Wednesday 04 February 2026 02:59:22 +0000 (0:00:02.917) 0:03:58.870 **** 2026-02-04 02:59:26.515643 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:59:26.515654 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:59:26.515665 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:59:26.515675 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:59:26.515686 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:59:26.515697 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:59:26.515708 | orchestrator | 2026-02-04 02:59:26.515719 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-04 
02:59:26.515730 | orchestrator | Wednesday 04 February 2026 02:59:23 +0000 (0:00:01.165) 0:04:00.035 **** 2026-02-04 02:59:26.515741 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:59:26.515751 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:59:26.515762 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:59:26.515773 | orchestrator | changed: [testbed-node-3] 2026-02-04 02:59:26.515784 | orchestrator | changed: [testbed-node-4] 2026-02-04 02:59:26.515794 | orchestrator | changed: [testbed-node-5] 2026-02-04 02:59:26.515805 | orchestrator | 2026-02-04 02:59:26.515816 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-04 02:59:26.515827 | orchestrator | Wednesday 04 February 2026 02:59:24 +0000 (0:00:01.340) 0:04:01.376 **** 2026-02-04 02:59:26.515847 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 02:59:26.515863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 02:59:26.515882 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 02:59:28.176957 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 02:59:28.177056 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 02:59:28.177083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 02:59:28.177091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 02:59:28.177100 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 02:59:28.177108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 02:59:28.177145 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 02:59:28.177153 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 02:59:28.177164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 02:59:28.177170 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 02:59:28.177177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 02:59:28.177184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 02:59:28.177204 | orchestrator | 2026-02-04 02:59:28.177212 | orchestrator | TASK [nova-cell : include_tasks] 
*********************************************** 2026-02-04 02:59:28.177220 | orchestrator | Wednesday 04 February 2026 02:59:26 +0000 (0:00:02.252) 0:04:03.629 **** 2026-02-04 02:59:28.177227 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 02:59:28.177235 | orchestrator | 2026-02-04 02:59:28.177241 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-04 02:59:28.177252 | orchestrator | Wednesday 04 February 2026 02:59:28 +0000 (0:00:01.248) 0:04:04.878 **** 2026-02-04 02:59:31.394942 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 02:59:31.395025 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 02:59:31.395032 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 02:59:31.395038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 02:59:31.395056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 02:59:31.395071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 02:59:31.395076 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 8022'], 'timeout': '30'}}}) 2026-02-04 02:59:31.395083 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 02:59:31.395088 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 02:59:31.395091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 02:59:31.395099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 
'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 02:59:31.395107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 02:59:32.888283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 
02:59:32.888442 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 02:59:32.888479 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 02:59:32.888492 | orchestrator | 2026-02-04 02:59:32.888505 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-04 02:59:32.888517 | orchestrator | Wednesday 04 February 2026 02:59:31 +0000 (0:00:03.599) 0:04:08.477 **** 2026-02-04 02:59:32.888529 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 02:59:32.888563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 02:59:32.888593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 02:59:32.888605 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:59:32.888621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 02:59:32.888633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 02:59:32.888643 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 02:59:32.888660 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:59:32.888671 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 02:59:32.888688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 02:59:34.906480 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 02:59:34.906602 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:59:34.906648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': 
'30'}}})  2026-02-04 02:59:34.906662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:59:34.906694 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:59:34.906705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 02:59:34.906715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2026-02-04 02:59:34.906725 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:59:34.906735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 02:59:34.906815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:59:34.906826 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:59:34.906834 | orchestrator | 2026-02-04 02:59:34.906844 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-04 02:59:34.906853 | orchestrator | Wednesday 04 February 2026 02:59:33 +0000 (0:00:01.577) 0:04:10.055 **** 2026-02-04 02:59:34.906867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 02:59:34.906884 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 02:59:34.906894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 02:59:34.906903 | orchestrator | skipping: [testbed-node-3] 2026-02-04 02:59:34.906911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 02:59:34.906929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 02:59:39.233781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 02:59:39.233914 | orchestrator | skipping: [testbed-node-4] 2026-02-04 02:59:39.233939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 02:59:39.233989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 02:59:39.234011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 02:59:39.234105 | orchestrator | skipping: [testbed-node-5] 2026-02-04 02:59:39.234128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 02:59:39.234243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 02:59:39.234272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:59:39.234297 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:59:39.234311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:59:39.234323 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:59:39.234406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 02:59:39.234420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 02:59:39.234433 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:59:39.234446 | orchestrator | 2026-02-04 02:59:39.234460 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-04 02:59:39.234475 | orchestrator | Wednesday 04 February 2026 02:59:35 +0000 (0:00:02.105) 0:04:12.160 **** 2026-02-04 02:59:39.234487 | orchestrator | skipping: [testbed-node-0] 2026-02-04 02:59:39.234500 | orchestrator | skipping: [testbed-node-1] 2026-02-04 02:59:39.234513 | orchestrator | skipping: [testbed-node-2] 2026-02-04 02:59:39.234526 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 02:59:39.234539 | orchestrator | 2026-02-04 02:59:39.234551 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-02-04 
02:59:39.234564 | orchestrator | Wednesday 04 February 2026 02:59:36 +0000 (0:00:01.142) 0:04:13.303 **** 2026-02-04 02:59:39.234577 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-04 02:59:39.234590 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-04 02:59:39.234601 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-04 02:59:39.234613 | orchestrator | 2026-02-04 02:59:39.234624 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-02-04 02:59:39.234635 | orchestrator | Wednesday 04 February 2026 02:59:37 +0000 (0:00:00.894) 0:04:14.198 **** 2026-02-04 02:59:39.234646 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-04 02:59:39.234661 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-04 02:59:39.234680 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-04 02:59:39.234698 | orchestrator | 2026-02-04 02:59:39.234716 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-02-04 02:59:39.234734 | orchestrator | Wednesday 04 February 2026 02:59:38 +0000 (0:00:01.200) 0:04:15.398 **** 2026-02-04 02:59:39.234764 | orchestrator | ok: [testbed-node-3] 2026-02-04 02:59:39.234782 | orchestrator | ok: [testbed-node-4] 2026-02-04 02:59:39.234799 | orchestrator | ok: [testbed-node-5] 2026-02-04 02:59:39.234817 | orchestrator | 2026-02-04 02:59:39.234849 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-02-04 03:00:00.289044 | orchestrator | Wednesday 04 February 2026 02:59:39 +0000 (0:00:00.533) 0:04:15.931 **** 2026-02-04 03:00:00.289182 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:00:00.289201 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:00:00.289213 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:00:00.289225 | orchestrator | 2026-02-04 03:00:00.289237 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 
2026-02-04 03:00:00.289250 | orchestrator | Wednesday 04 February 2026 02:59:39 +0000 (0:00:00.510) 0:04:16.441 **** 2026-02-04 03:00:00.289261 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-04 03:00:00.289279 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-04 03:00:00.289306 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-04 03:00:00.289328 | orchestrator | 2026-02-04 03:00:00.289407 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-02-04 03:00:00.289425 | orchestrator | Wednesday 04 February 2026 02:59:40 +0000 (0:00:01.194) 0:04:17.636 **** 2026-02-04 03:00:00.289476 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-04 03:00:00.289497 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-04 03:00:00.289517 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-04 03:00:00.289538 | orchestrator | 2026-02-04 03:00:00.289558 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-02-04 03:00:00.289599 | orchestrator | Wednesday 04 February 2026 02:59:42 +0000 (0:00:01.435) 0:04:19.071 **** 2026-02-04 03:00:00.289633 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-02-04 03:00:00.289653 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-02-04 03:00:00.289671 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-02-04 03:00:00.289689 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-02-04 03:00:00.289706 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-02-04 03:00:00.289724 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-02-04 03:00:00.289744 | orchestrator | 2026-02-04 03:00:00.289762 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-02-04 
03:00:00.289781 | orchestrator | Wednesday 04 February 2026 02:59:46 +0000 (0:00:03.665) 0:04:22.736 **** 2026-02-04 03:00:00.289801 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:00:00.289821 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:00:00.289844 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:00:00.289871 | orchestrator | 2026-02-04 03:00:00.289889 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-02-04 03:00:00.289906 | orchestrator | Wednesday 04 February 2026 02:59:46 +0000 (0:00:00.306) 0:04:23.042 **** 2026-02-04 03:00:00.289924 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:00:00.289940 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:00:00.289959 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:00:00.289976 | orchestrator | 2026-02-04 03:00:00.289997 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-02-04 03:00:00.290082 | orchestrator | Wednesday 04 February 2026 02:59:46 +0000 (0:00:00.305) 0:04:23.348 **** 2026-02-04 03:00:00.290097 | orchestrator | changed: [testbed-node-3] 2026-02-04 03:00:00.290109 | orchestrator | changed: [testbed-node-4] 2026-02-04 03:00:00.290120 | orchestrator | changed: [testbed-node-5] 2026-02-04 03:00:00.290131 | orchestrator | 2026-02-04 03:00:00.290143 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-02-04 03:00:00.290154 | orchestrator | Wednesday 04 February 2026 02:59:48 +0000 (0:00:01.512) 0:04:24.860 **** 2026-02-04 03:00:00.290167 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-04 03:00:00.290205 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-04 03:00:00.290216 | orchestrator | 
changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-02-04 03:00:00.290228 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-04 03:00:00.290239 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-04 03:00:00.290251 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-02-04 03:00:00.290262 | orchestrator | 2026-02-04 03:00:00.290273 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-02-04 03:00:00.290284 | orchestrator | Wednesday 04 February 2026 02:59:51 +0000 (0:00:03.295) 0:04:28.156 **** 2026-02-04 03:00:00.290295 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-04 03:00:00.290306 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-04 03:00:00.290317 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-04 03:00:00.290328 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-04 03:00:00.290359 | orchestrator | changed: [testbed-node-3] 2026-02-04 03:00:00.290370 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-04 03:00:00.290381 | orchestrator | changed: [testbed-node-4] 2026-02-04 03:00:00.290392 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-04 03:00:00.290403 | orchestrator | changed: [testbed-node-5] 2026-02-04 03:00:00.290414 | orchestrator | 2026-02-04 03:00:00.290425 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-02-04 03:00:00.290436 | orchestrator | Wednesday 04 February 2026 02:59:54 +0000 (0:00:03.304) 0:04:31.461 **** 2026-02-04 03:00:00.290447 | 
orchestrator | skipping: [testbed-node-3] 2026-02-04 03:00:00.290458 | orchestrator | 2026-02-04 03:00:00.290494 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-02-04 03:00:00.290507 | orchestrator | Wednesday 04 February 2026 02:59:54 +0000 (0:00:00.149) 0:04:31.610 **** 2026-02-04 03:00:00.290518 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:00:00.290529 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:00:00.290540 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:00:00.290551 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:00:00.290561 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:00:00.290572 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:00:00.290583 | orchestrator | 2026-02-04 03:00:00.290594 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-02-04 03:00:00.290661 | orchestrator | Wednesday 04 February 2026 02:59:55 +0000 (0:00:00.839) 0:04:32.450 **** 2026-02-04 03:00:00.290672 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-04 03:00:00.290683 | orchestrator | 2026-02-04 03:00:00.290694 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-02-04 03:00:00.290705 | orchestrator | Wednesday 04 February 2026 02:59:56 +0000 (0:00:00.694) 0:04:33.144 **** 2026-02-04 03:00:00.290725 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:00:00.290736 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:00:00.290747 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:00:00.290758 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:00:00.290769 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:00:00.290779 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:00:00.290790 | orchestrator | 2026-02-04 03:00:00.290801 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 
2026-02-04 03:00:00.290812 | orchestrator | Wednesday 04 February 2026 02:59:57 +0000 (0:00:00.613) 0:04:33.757 **** 2026-02-04 03:00:00.290842 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 03:00:00.290858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 03:00:00.290870 | 
orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 03:00:00.290892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 03:00:04.696851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 03:00:04.696984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 03:00:04.697003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 03:00:04.697017 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 03:00:04.697028 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 03:00:04.697040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:04.697127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:04.697201 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:04.697226 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:04.697240 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:04.697251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:04.697263 | orchestrator | 2026-02-04 03:00:04.697277 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-04 03:00:04.697290 | orchestrator | Wednesday 04 February 2026 03:00:00 +0000 (0:00:03.456) 0:04:37.213 **** 2026-02-04 03:00:04.697311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 03:00:06.640990 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 03:00:06.641134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 03:00:06.641153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 03:00:06.641166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 03:00:06.641177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 03:00:06.641208 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:06.641237 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:06.641250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 03:00:06.641262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 03:00:06.641273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 03:00:06.641286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:06.641306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:25.234864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:25.235002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:25.235022 | orchestrator | 2026-02-04 03:00:25.235037 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-04 03:00:25.235050 | orchestrator | Wednesday 04 February 2026 03:00:06 +0000 (0:00:06.131) 0:04:43.344 **** 2026-02-04 03:00:25.235061 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:00:25.235074 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:00:25.235084 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:00:25.235095 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:00:25.235106 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:00:25.235116 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:00:25.235127 | orchestrator | 2026-02-04 03:00:25.235138 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-04 03:00:25.235149 | orchestrator | Wednesday 04 February 2026 03:00:08 +0000 (0:00:01.605) 0:04:44.950 **** 2026-02-04 03:00:25.235160 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-04 03:00:25.235171 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-04 03:00:25.235182 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-04 03:00:25.235193 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-04 03:00:25.235204 | orchestrator | changed: [testbed-node-3] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-04 03:00:25.235215 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-04 03:00:25.235226 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-04 03:00:25.235237 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:00:25.235248 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-04 03:00:25.235259 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:00:25.235270 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-04 03:00:25.235281 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:00:25.235292 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-04 03:00:25.235303 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-04 03:00:25.235370 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-04 03:00:25.235386 | orchestrator | 2026-02-04 03:00:25.235400 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-04 03:00:25.235414 | orchestrator | Wednesday 04 February 2026 03:00:12 +0000 (0:00:03.877) 0:04:48.828 **** 2026-02-04 03:00:25.235427 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:00:25.235440 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:00:25.235452 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:00:25.235465 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:00:25.235478 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:00:25.235492 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:00:25.235505 | orchestrator | 2026-02-04 03:00:25.235518 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-04 03:00:25.235531 | orchestrator | Wednesday 04 February 2026 03:00:12 +0000 (0:00:00.636) 0:04:49.465 **** 2026-02-04 03:00:25.235544 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-04 03:00:25.235558 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-04 03:00:25.235571 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-04 03:00:25.235584 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-04 03:00:25.235613 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-04 03:00:25.235625 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-04 03:00:25.235643 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-04 03:00:25.235655 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-04 03:00:25.235665 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-04 03:00:25.235676 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-04 03:00:25.235687 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:00:25.235698 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-04 03:00:25.235709 | orchestrator | 
skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-04 03:00:25.235719 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:00:25.235730 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:00:25.235741 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-04 03:00:25.235752 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-04 03:00:25.235762 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-04 03:00:25.235773 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-04 03:00:25.235784 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-04 03:00:25.235795 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-04 03:00:25.235805 | orchestrator | 2026-02-04 03:00:25.235816 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-04 03:00:25.235827 | orchestrator | Wednesday 04 February 2026 03:00:18 +0000 (0:00:05.621) 0:04:55.086 **** 2026-02-04 03:00:25.235848 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 03:00:25.235859 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 03:00:25.235870 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 03:00:25.235880 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-04 03:00:25.235891 | 
orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-04 03:00:25.235902 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-04 03:00:25.235913 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-04 03:00:25.235924 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-04 03:00:25.235934 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-04 03:00:25.235945 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 03:00:25.235956 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 03:00:25.235967 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 03:00:25.235977 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-04 03:00:25.235988 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-04 03:00:25.235998 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:00:25.236009 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-04 03:00:25.236020 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:00:25.236031 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-04 03:00:25.236042 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:00:25.236053 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-04 03:00:25.236063 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-04 03:00:25.236074 | orchestrator | changed: [testbed-node-3] => 
(item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-04 03:00:25.236085 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-04 03:00:25.236096 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-04 03:00:25.236107 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-04 03:00:25.236123 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-04 03:00:30.152630 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-04 03:00:30.152781 | orchestrator | 2026-02-04 03:00:30.152806 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-04 03:00:30.152849 | orchestrator | Wednesday 04 February 2026 03:00:25 +0000 (0:00:06.831) 0:05:01.917 **** 2026-02-04 03:00:30.152865 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:00:30.152876 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:00:30.152886 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:00:30.152896 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:00:30.152906 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:00:30.152916 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:00:30.152926 | orchestrator | 2026-02-04 03:00:30.152936 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-04 03:00:30.152946 | orchestrator | Wednesday 04 February 2026 03:00:25 +0000 (0:00:00.792) 0:05:02.710 **** 2026-02-04 03:00:30.152956 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:00:30.152991 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:00:30.153001 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:00:30.153011 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:00:30.153020 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 03:00:30.153030 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:00:30.153040 | orchestrator | 2026-02-04 03:00:30.153050 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-04 03:00:30.153060 | orchestrator | Wednesday 04 February 2026 03:00:26 +0000 (0:00:00.630) 0:05:03.341 **** 2026-02-04 03:00:30.153070 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:00:30.153081 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:00:30.153098 | orchestrator | changed: [testbed-node-4] 2026-02-04 03:00:30.153115 | orchestrator | changed: [testbed-node-3] 2026-02-04 03:00:30.153131 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:00:30.153148 | orchestrator | changed: [testbed-node-5] 2026-02-04 03:00:30.153164 | orchestrator | 2026-02-04 03:00:30.153180 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-04 03:00:30.153196 | orchestrator | Wednesday 04 February 2026 03:00:28 +0000 (0:00:02.217) 0:05:05.558 **** 2026-02-04 03:00:30.153219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-02-04 03:00:30.153244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 03:00:30.153266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 03:00:30.153280 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:00:30.153322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 03:00:30.153373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 03:00:30.153387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 03:00:30.153398 | orchestrator | skipping: 
[testbed-node-4] 2026-02-04 03:00:30.153449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-04 03:00:30.153463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-04 03:00:30.153484 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-04 03:00:33.443718 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:00:33.443815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 03:00:33.443830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 03:00:33.443838 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:00:33.443847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 03:00:33.443854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 03:00:33.443861 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:00:33.443868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-04 03:00:33.443875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 03:00:33.443904 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:00:33.443912 | orchestrator | 2026-02-04 03:00:33.443920 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-04 03:00:33.443929 | orchestrator | Wednesday 04 February 2026 03:00:30 +0000 (0:00:01.398) 0:05:06.956 **** 2026-02-04 03:00:33.443937 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-04 03:00:33.443958 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-04 03:00:33.443971 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:00:33.443976 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-04 03:00:33.443980 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-04 03:00:33.443984 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:00:33.443989 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-04 03:00:33.443993 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-04 03:00:33.443997 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:00:33.444001 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-04 03:00:33.444005 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-04 03:00:33.444009 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:00:33.444013 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
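Each loop item above carries a kolla-style `healthcheck` dict whose durations are strings in seconds (`'interval': '30'`, `'timeout': '30'`, ...), while Docker itself expects integer nanoseconds. A minimal sketch of that translation, assuming plain Python outside the job (the helper name `to_docker_healthcheck` is hypothetical, not part of kolla-ansible):

```python
# Hypothetical helper: convert a kolla-style healthcheck dict (seconds as
# strings, as seen in the loop items above) into the shape Docker's
# HealthConfig expects (durations in nanoseconds, retries as int).

NS_PER_SEC = 1_000_000_000

def to_docker_healthcheck(hc: dict) -> dict:
    """Translate string-second durations into Docker HealthConfig values."""
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'virsh version --daemon']
        "interval": int(hc["interval"]) * NS_PER_SEC,
        "timeout": int(hc["timeout"]) * NS_PER_SEC,
        "retries": int(hc["retries"]),
        "start_period": int(hc["start_period"]) * NS_PER_SEC,
    }

# One of the healthcheck dicts from the nova-libvirt item above:
sample = {"interval": "30", "retries": "3", "start_period": "5",
          "test": ["CMD-SHELL", "virsh version --daemon"], "timeout": "30"}
print(to_docker_healthcheck(sample))
```

This is only an illustration of the unit conversion implied by the dict values; the actual conversion inside kolla-ansible's container module may differ.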
 2026-02-04 03:00:33.444018 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-04 03:00:33.444022 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:00:33.444026 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-04 03:00:33.444032 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-04 03:00:33.444038 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:00:33.444045 | orchestrator | 2026-02-04 03:00:33.444052 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-04 03:00:33.444059 | orchestrator | Wednesday 04 February 2026 03:00:31 +0000 (0:00:00.909) 0:05:07.866 **** 2026-02-04 03:00:33.444068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 03:00:33.444076 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 03:00:33.444090 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-04 03:00:33.444107 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 03:00:35.609917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 03:00:35.610132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 03:00:35.610167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 03:00:35.610181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-04 03:00:35.610217 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-04 03:00:35.610231 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:35.610279 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:35.610292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:35.610304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:35.610316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:35.610336 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-04 03:00:35.610401 | orchestrator | 2026-02-04 03:00:35.610414 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-04 03:00:35.610427 | orchestrator | Wednesday 04 February 2026 03:00:33 +0000 (0:00:02.558) 0:05:10.425 **** 2026-02-04 
03:00:35.610438 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:00:35.610450 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:00:35.610461 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:00:35.610471 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:00:35.610482 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:00:35.610493 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:00:35.610504 | orchestrator | 2026-02-04 03:00:35.610514 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-04 03:00:35.610525 | orchestrator | Wednesday 04 February 2026 03:00:34 +0000 (0:00:00.819) 0:05:11.245 **** 2026-02-04 03:00:35.610536 | orchestrator | 2026-02-04 03:00:35.610547 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-04 03:00:35.610558 | orchestrator | Wednesday 04 February 2026 03:00:34 +0000 (0:00:00.145) 0:05:11.390 **** 2026-02-04 03:00:35.610568 | orchestrator | 2026-02-04 03:00:35.610580 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-04 03:00:35.610596 | orchestrator | Wednesday 04 February 2026 03:00:34 +0000 (0:00:00.143) 0:05:11.533 **** 2026-02-04 03:00:35.610607 | orchestrator | 2026-02-04 03:00:35.610619 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-04 03:00:35.610637 | orchestrator | Wednesday 04 February 2026 03:00:34 +0000 (0:00:00.146) 0:05:11.680 **** 2026-02-04 03:03:41.400926 | orchestrator | 2026-02-04 03:03:41.401028 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-04 03:03:41.401040 | orchestrator | Wednesday 04 February 2026 03:00:35 +0000 (0:00:00.145) 0:05:11.825 **** 2026-02-04 03:03:41.401048 | orchestrator | 2026-02-04 03:03:41.401054 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-02-04 03:03:41.401061 | orchestrator | Wednesday 04 February 2026 03:00:35 +0000 (0:00:00.310) 0:05:12.136 **** 2026-02-04 03:03:41.401067 | orchestrator | 2026-02-04 03:03:41.401074 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-04 03:03:41.401082 | orchestrator | Wednesday 04 February 2026 03:00:35 +0000 (0:00:00.146) 0:05:12.283 **** 2026-02-04 03:03:41.401088 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:03:41.401096 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:03:41.401102 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:03:41.401109 | orchestrator | 2026-02-04 03:03:41.401116 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-04 03:03:41.401122 | orchestrator | Wednesday 04 February 2026 03:00:42 +0000 (0:00:06.573) 0:05:18.856 **** 2026-02-04 03:03:41.401129 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:03:41.401136 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:03:41.401142 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:03:41.401149 | orchestrator | 2026-02-04 03:03:41.401156 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-04 03:03:41.401187 | orchestrator | Wednesday 04 February 2026 03:00:58 +0000 (0:00:16.844) 0:05:35.700 **** 2026-02-04 03:03:41.401194 | orchestrator | changed: [testbed-node-4] 2026-02-04 03:03:41.401200 | orchestrator | changed: [testbed-node-5] 2026-02-04 03:03:41.401207 | orchestrator | changed: [testbed-node-3] 2026-02-04 03:03:41.401214 | orchestrator | 2026-02-04 03:03:41.401220 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-04 03:03:41.401226 | orchestrator | Wednesday 04 February 2026 03:01:19 +0000 (0:00:20.618) 0:05:56.318 **** 2026-02-04 03:03:41.401232 | orchestrator | changed: 
[testbed-node-5] 2026-02-04 03:03:41.401239 | orchestrator | changed: [testbed-node-4] 2026-02-04 03:03:41.401246 | orchestrator | changed: [testbed-node-3] 2026-02-04 03:03:41.401252 | orchestrator | 2026-02-04 03:03:41.401259 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-04 03:03:41.401266 | orchestrator | Wednesday 04 February 2026 03:01:57 +0000 (0:00:37.905) 0:06:34.224 **** 2026-02-04 03:03:41.401273 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-02-04 03:03:41.401281 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2026-02-04 03:03:41.401289 | orchestrator | FAILED - RETRYING: [testbed-node-5]: Checking libvirt container is ready (10 retries left). 2026-02-04 03:03:41.401295 | orchestrator | changed: [testbed-node-4] 2026-02-04 03:03:41.401302 | orchestrator | changed: [testbed-node-3] 2026-02-04 03:03:41.401309 | orchestrator | changed: [testbed-node-5] 2026-02-04 03:03:41.401316 | orchestrator | 2026-02-04 03:03:41.401322 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-04 03:03:41.401329 | orchestrator | Wednesday 04 February 2026 03:02:03 +0000 (0:00:06.154) 0:06:40.378 **** 2026-02-04 03:03:41.401335 | orchestrator | changed: [testbed-node-3] 2026-02-04 03:03:41.401341 | orchestrator | changed: [testbed-node-4] 2026-02-04 03:03:41.401348 | orchestrator | changed: [testbed-node-5] 2026-02-04 03:03:41.401354 | orchestrator | 2026-02-04 03:03:41.401361 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-04 03:03:41.401369 | orchestrator | Wednesday 04 February 2026 03:02:04 +0000 (0:00:00.793) 0:06:41.171 **** 2026-02-04 03:03:41.401375 | orchestrator | changed: [testbed-node-4] 2026-02-04 03:03:41.401381 | orchestrator | changed: [testbed-node-5] 2026-02-04 
03:03:41.401388 | orchestrator | changed: [testbed-node-3] 2026-02-04 03:03:41.401413 | orchestrator | 2026-02-04 03:03:41.401420 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-04 03:03:41.401427 | orchestrator | Wednesday 04 February 2026 03:02:33 +0000 (0:00:29.473) 0:07:10.645 **** 2026-02-04 03:03:41.401433 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:03:41.401439 | orchestrator | 2026-02-04 03:03:41.401446 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-02-04 03:03:41.401452 | orchestrator | Wednesday 04 February 2026 03:02:34 +0000 (0:00:00.324) 0:07:10.970 **** 2026-02-04 03:03:41.401458 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:03:41.401465 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:03:41.401471 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:03:41.401478 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:03:41.401484 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:03:41.401491 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
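The `FAILED - RETRYING: ... (10 retries left)` and `(20 retries left)` lines above are Ansible's `until`/`retries`/`delay` loop doing its job: the check is re-run after a pause until it succeeds or the retry budget is exhausted. A sketch of the same shape in plain Python (the `wait_until` name and exact attempt-counting are assumptions, not Ansible's internals):

```python
# Hedged sketch of a retry-until-ready loop, mirroring the shape of the
# "FAILED - RETRYING ... (N retries left)" behaviour seen above.

import time

def wait_until(check, retries: int = 10, delay: float = 5.0) -> bool:
    """Call `check` up to `retries + 1` times, sleeping `delay` between tries.

    Returns True as soon as `check()` succeeds, False once retries run out.
    """
    for attempt in range(retries + 1):
        if check():
            return True
        if attempt < retries:
            time.sleep(delay)
    return False
```

For example, a check that only passes on its second call returns True after one retry; a check that never passes returns False after the budget is spent.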
2026-02-04 03:03:41.401516 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-04 03:03:41.401523 | orchestrator | 2026-02-04 03:03:41.401530 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-04 03:03:41.401537 | orchestrator | Wednesday 04 February 2026 03:02:56 +0000 (0:00:22.448) 0:07:33.419 **** 2026-02-04 03:03:41.401544 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:03:41.401550 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:03:41.401557 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:03:41.401571 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:03:41.401577 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:03:41.401584 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:03:41.401590 | orchestrator | 2026-02-04 03:03:41.401597 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-04 03:03:41.401604 | orchestrator | Wednesday 04 February 2026 03:03:05 +0000 (0:00:08.928) 0:07:42.347 **** 2026-02-04 03:03:41.401625 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:03:41.401633 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:03:41.401640 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:03:41.401646 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:03:41.401653 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:03:41.401677 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-02-04 03:03:41.401684 | orchestrator | 2026-02-04 03:03:41.401691 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-04 03:03:41.401698 | orchestrator | Wednesday 04 February 2026 03:03:09 +0000 (0:00:03.831) 0:07:46.179 **** 2026-02-04 03:03:41.401704 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-04 03:03:41.401710 | 
orchestrator | 2026-02-04 03:03:41.401716 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-04 03:03:41.401723 | orchestrator | Wednesday 04 February 2026 03:03:22 +0000 (0:00:13.283) 0:07:59.462 **** 2026-02-04 03:03:41.401729 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-04 03:03:41.401735 | orchestrator | 2026-02-04 03:03:41.401742 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-04 03:03:41.401748 | orchestrator | Wednesday 04 February 2026 03:03:24 +0000 (0:00:01.610) 0:08:01.073 **** 2026-02-04 03:03:41.401754 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:03:41.401761 | orchestrator | 2026-02-04 03:03:41.401766 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-04 03:03:41.401773 | orchestrator | Wednesday 04 February 2026 03:03:26 +0000 (0:00:01.645) 0:08:02.718 **** 2026-02-04 03:03:41.401779 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-04 03:03:41.401786 | orchestrator | 2026-02-04 03:03:41.401792 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-02-04 03:03:41.401798 | orchestrator | Wednesday 04 February 2026 03:03:37 +0000 (0:00:11.392) 0:08:14.111 **** 2026-02-04 03:03:41.401805 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:03:41.401812 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:03:41.401818 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:03:41.401824 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:03:41.401830 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:03:41.401836 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:03:41.401842 | orchestrator | 2026-02-04 03:03:41.401849 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-02-04 03:03:41.401855 | orchestrator | 2026-02-04 
03:03:41.401862 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-02-04 03:03:41.401868 | orchestrator | Wednesday 04 February 2026 03:03:39 +0000 (0:00:01.749) 0:08:15.860 ****
2026-02-04 03:03:41.401874 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:03:41.401880 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:03:41.401886 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:03:41.401892 | orchestrator |
2026-02-04 03:03:41.401898 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-02-04 03:03:41.401905 | orchestrator |
2026-02-04 03:03:41.401911 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-02-04 03:03:41.401917 | orchestrator | Wednesday 04 February 2026 03:03:40 +0000 (0:00:00.916) 0:08:16.777 ****
2026-02-04 03:03:41.401923 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:03:41.401929 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:03:41.401936 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:03:41.401947 | orchestrator |
2026-02-04 03:03:41.401954 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-04 03:03:41.401960 | orchestrator |
2026-02-04 03:03:41.401966 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-04 03:03:41.401972 | orchestrator | Wednesday 04 February 2026 03:03:40 +0000 (0:00:00.749) 0:08:17.527 ****
2026-02-04 03:03:41.401979 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-04 03:03:41.401985 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-04 03:03:41.401992 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-04 03:03:41.401999 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-04 03:03:41.402005 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-04 03:03:41.402011 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-04 03:03:41.402064 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:03:41.402071 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-04 03:03:41.402078 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-04 03:03:41.402084 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-04 03:03:41.402091 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-04 03:03:41.402098 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-04 03:03:41.402105 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-04 03:03:41.402111 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:03:41.402117 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-04 03:03:41.402124 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-04 03:03:41.402131 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-04 03:03:41.402137 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-04 03:03:41.402144 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-04 03:03:41.402150 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-04 03:03:41.402157 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:03:41.402163 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-04 03:03:41.402170 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-04 03:03:41.402176 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-04 03:03:41.402183 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-04 03:03:41.402194 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-04 03:03:41.402201 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-04 03:03:41.402207 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-04 03:03:41.402214 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-04 03:03:41.402226 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-04 03:03:44.496052 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-04 03:03:44.496146 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-04 03:03:44.496159 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-04 03:03:44.496170 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:03:44.496180 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:03:44.496189 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-04 03:03:44.496199 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-04 03:03:44.496208 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-04 03:03:44.496277 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-04 03:03:44.496287 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-04 03:03:44.496296 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-04 03:03:44.496328 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:03:44.496337 | orchestrator |
2026-02-04 03:03:44.496347 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-04 03:03:44.496356 | orchestrator |
2026-02-04 03:03:44.496365 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-04 03:03:44.496375 | orchestrator | Wednesday 04 February 2026 03:03:42 +0000 (0:00:01.350) 0:08:18.877 ****
2026-02-04 03:03:44.496383 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-04 03:03:44.496432 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-04 03:03:44.496442 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:03:44.496451 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-04 03:03:44.496460 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-04 03:03:44.496469 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:03:44.496478 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-04 03:03:44.496487 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-04 03:03:44.496495 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:03:44.496504 | orchestrator |
2026-02-04 03:03:44.496513 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-04 03:03:44.496522 | orchestrator |
2026-02-04 03:03:44.496531 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-04 03:03:44.496539 | orchestrator | Wednesday 04 February 2026 03:03:42 +0000 (0:00:00.550) 0:08:19.428 ****
2026-02-04 03:03:44.496548 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:03:44.496557 | orchestrator |
2026-02-04 03:03:44.496566 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-04 03:03:44.496574 | orchestrator |
2026-02-04 03:03:44.496583 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-04 03:03:44.496592 | orchestrator | Wednesday 04 February 2026 03:03:43 +0000 (0:00:00.708) 0:08:20.137 ****
2026-02-04 03:03:44.496601 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:03:44.496612 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:03:44.496622 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:03:44.496632 | orchestrator |
2026-02-04 03:03:44.496643 | orchestrator |
2026-02-04 03:03:44.496653 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 03:03:44.496666 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:03:44.496677 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-04 03:03:44.496688 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-04 03:03:44.496736 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-04 03:03:44.496769 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-04 03:03:44.496782 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-04 03:03:44.496792 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-02-04 03:03:44.496802 | orchestrator |
2026-02-04 03:03:44.496812 | orchestrator |
2026-02-04 03:03:44.496823 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 03:03:44.496834 | orchestrator | Wednesday 04 February 2026 03:03:44 +0000 (0:00:00.642) 0:08:20.780 ****
2026-02-04 03:03:44.496851 | orchestrator | ===============================================================================
2026-02-04 03:03:44.496862 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 37.91s
2026-02-04 03:03:44.496872 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.94s
2026-02-04 03:03:44.496897 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 29.47s
2026-02-04 03:03:44.496906 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.45s
2026-02-04 03:03:44.496915 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.62s
2026-02-04 03:03:44.496924 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.86s
2026-02-04 03:03:44.496954 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.08s
2026-02-04 03:03:44.496968 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.84s
2026-02-04 03:03:44.496977 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 16.59s
2026-02-04 03:03:44.496986 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.97s
2026-02-04 03:03:44.496995 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.28s
2026-02-04 03:03:44.497003 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.56s
2026-02-04 03:03:44.497012 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.14s
2026-02-04 03:03:44.497021 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.55s
2026-02-04 03:03:44.497030 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.39s
2026-02-04 03:03:44.497038 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.10s
2026-02-04 03:03:44.497047 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.93s
2026-02-04 03:03:44.497056 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.41s
2026-02-04 03:03:44.497065 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.10s
2026-02-04 03:03:44.497065 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 6.83s
2026-02-04 03:03:46.877563 | orchestrator | 2026-02-04 03:03:46 | INFO  | Task f523083f-8960-416a-baeb-448504a954ad (horizon) was prepared for execution.
2026-02-04 03:03:46.877691 | orchestrator | 2026-02-04 03:03:46 | INFO  | It takes a moment until task f523083f-8960-416a-baeb-448504a954ad (horizon) has been started and output is visible here.
2026-02-04 03:03:54.130365 | orchestrator |
2026-02-04 03:03:54.130546 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 03:03:54.130565 | orchestrator |
2026-02-04 03:03:54.130577 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 03:03:54.130589 | orchestrator | Wednesday 04 February 2026 03:03:51 +0000 (0:00:00.259) 0:00:00.259 ****
2026-02-04 03:03:54.130601 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:03:54.130614 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:03:54.130625 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:03:54.130636 | orchestrator |
2026-02-04 03:03:54.130648 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 03:03:54.130659 | orchestrator | Wednesday 04 February 2026 03:03:51 +0000 (0:00:00.362) 0:00:00.621 ****
2026-02-04 03:03:54.130670 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-02-04 03:03:54.130682 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-02-04 03:03:54.130706 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-02-04 03:03:54.130717 | orchestrator |
2026-02-04 03:03:54.130728 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-02-04 03:03:54.130740 | orchestrator |
2026-02-04 03:03:54.130751 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-04 03:03:54.130762 | orchestrator |
Wednesday 04 February 2026 03:03:51 +0000 (0:00:00.448) 0:00:01.070 **** 2026-02-04 03:03:54.130795 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:03:54.130808 | orchestrator | 2026-02-04 03:03:54.130818 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-02-04 03:03:54.130829 | orchestrator | Wednesday 04 February 2026 03:03:52 +0000 (0:00:00.555) 0:00:01.626 **** 2026-02-04 03:03:54.130863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 03:03:54.130910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 03:03:54.130957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 03:03:54.130973 | orchestrator | 2026-02-04 03:03:54.130987 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-04 03:03:54.131000 | orchestrator | Wednesday 04 February 2026 03:03:53 +0000 (0:00:01.145) 0:00:02.771 **** 2026-02-04 03:03:54.131013 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:03:54.131026 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:03:54.131036 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:03:54.131047 | orchestrator | 2026-02-04 03:03:54.131058 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-04 03:03:54.131070 | orchestrator | Wednesday 04 February 2026 03:03:54 +0000 (0:00:00.467) 0:00:03.239 **** 2026-02-04 03:03:54.131088 | orchestrator | skipping: [testbed-node-0] => (item={'name': 
'cloudkitty', 'enabled': False})
2026-02-04 03:04:00.269098 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-04 03:04:00.269243 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-02-04 03:04:00.269259 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-02-04 03:04:00.269271 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-02-04 03:04:00.269307 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-02-04 03:04:00.269319 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-02-04 03:04:00.269330 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-02-04 03:04:00.269341 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-04 03:04:00.269352 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-04 03:04:00.269362 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-02-04 03:04:00.269374 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-02-04 03:04:00.269384 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-02-04 03:04:00.269395 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-02-04 03:04:00.269438 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-02-04 03:04:00.269450 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-02-04 03:04:00.269460 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-04 03:04:00.269471 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-04 03:04:00.269482 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-02-04 03:04:00.269492 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-02-04 03:04:00.269503 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-02-04 03:04:00.269514 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-02-04 03:04:00.269524 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-02-04 03:04:00.269535 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-02-04 03:04:00.269550 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-02-04 03:04:00.269566 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-02-04 03:04:00.269595 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-02-04 03:04:00.269610 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-04 03:04:00.269622 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-04 03:04:00.269636 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-04 03:04:00.269650 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-04 03:04:00.269664 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-04 03:04:00.269677 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-04 03:04:00.269691 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-04 03:04:00.269713 | orchestrator |
2026-02-04 03:04:00.269727 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 03:04:00.269741 | orchestrator | Wednesday 04 February 2026 03:03:54 +0000 (0:00:00.805) 0:00:04.045 ****
2026-02-04 03:04:00.269755 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:04:00.269769 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:04:00.269783 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:04:00.269795 | orchestrator |
2026-02-04 03:04:00.269808 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 03:04:00.269821 | orchestrator | Wednesday 04 February 2026 03:03:55 +0000 (0:00:00.305) 0:00:04.350 ****
2026-02-04 03:04:00.269834 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:00.269849 | orchestrator |
2026-02-04 03:04:00.269881 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 03:04:00.269895 | orchestrator | Wednesday 04 February 2026 03:03:55 +0000 (0:00:00.305) 0:00:04.655 ****
2026-02-04 03:04:00.269908 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:00.269919 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:04:00.269930 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:04:00.269941 | orchestrator |
2026-02-04 03:04:00.269952 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 03:04:00.269963 | orchestrator | Wednesday 04 February 2026 03:03:55 +0000 (0:00:00.320) 0:00:04.975 ****
2026-02-04 03:04:00.269974 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:04:00.269985 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:04:00.269996 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:04:00.270007 | orchestrator |
2026-02-04 03:04:00.270079 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 03:04:00.270091 | orchestrator | Wednesday 04 February 2026 03:03:56 +0000 (0:00:00.319) 0:00:05.295 ****
2026-02-04 03:04:00.270102 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:00.270113 | orchestrator |
2026-02-04 03:04:00.270125 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 03:04:00.270136 | orchestrator | Wednesday 04 February 2026 03:03:56 +0000 (0:00:00.153) 0:00:05.449 ****
2026-02-04 03:04:00.270148 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:00.270159 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:04:00.270170 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:04:00.270181 | orchestrator |
2026-02-04 03:04:00.270192 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 03:04:00.270203 | orchestrator | Wednesday 04 February 2026 03:03:56 +0000 (0:00:00.317) 0:00:05.766 ****
2026-02-04 03:04:00.270214 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:04:00.270224 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:04:00.270235 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:04:00.270246 | orchestrator |
2026-02-04 03:04:00.270257 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 03:04:00.270268 | orchestrator | Wednesday 04 February 2026 03:03:57 +0000 (0:00:00.564) 0:00:06.331 ****
2026-02-04 03:04:00.270279 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:00.270290 | orchestrator |
2026-02-04 03:04:00.270301 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 03:04:00.270312 | orchestrator | Wednesday 04 February 2026 03:03:57 +0000 (0:00:00.142) 0:00:06.473 ****
2026-02-04 03:04:00.270322 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:00.270333 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:04:00.270344 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:04:00.270355 | orchestrator |
2026-02-04 03:04:00.270366 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 03:04:00.270377 | orchestrator | Wednesday 04 February 2026 03:03:57 +0000 (0:00:00.305) 0:00:06.779 ****
2026-02-04 03:04:00.270388 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:04:00.270433 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:04:00.270446 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:04:00.270465 | orchestrator |
2026-02-04 03:04:00.270477 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 03:04:00.270488 | orchestrator | Wednesday 04 February 2026 03:03:57 +0000 (0:00:00.319) 0:00:07.099 ****
2026-02-04 03:04:00.270498 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:00.270509 | orchestrator |
2026-02-04 03:04:00.270520 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 03:04:00.270531 | orchestrator | Wednesday 04 February 2026 03:03:58 +0000 (0:00:00.134) 0:00:07.233 ****
2026-02-04 03:04:00.270541 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:00.270558 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:04:00.270569 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:04:00.270580 | orchestrator |
2026-02-04 03:04:00.270591 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 03:04:00.270602 | orchestrator | Wednesday 04 February 2026 03:03:58 +0000 (0:00:00.485) 0:00:07.719 ****
2026-02-04 03:04:00.270613 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:04:00.270623 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:04:00.270634 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:04:00.270645 | orchestrator |
2026-02-04 03:04:00.270656 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 03:04:00.270667 | orchestrator | Wednesday 04 February 2026 03:03:58 +0000 (0:00:00.331) 0:00:08.051 ****
2026-02-04 03:04:00.270677 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:00.270688 | orchestrator |
2026-02-04 03:04:00.270699 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 03:04:00.270710 | orchestrator | Wednesday 04 February 2026 03:03:58 +0000 (0:00:00.136) 0:00:08.187 ****
2026-02-04 03:04:00.270720 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:00.270731 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:04:00.270742 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:04:00.270753 | orchestrator |
2026-02-04 03:04:00.270764 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 03:04:00.270774 | orchestrator | Wednesday 04 February 2026 03:03:59 +0000 (0:00:00.324) 0:00:08.512 ****
2026-02-04 03:04:00.270785 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:04:00.270796 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:04:00.270807 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:04:00.270817 | orchestrator |
2026-02-04 03:04:00.270828 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 03:04:00.270839 | orchestrator | Wednesday 04 February 2026 03:03:59 +0000 (0:00:00.322) 0:00:08.834 ****
2026-02-04 03:04:00.270850 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:00.270860 | orchestrator |
2026-02-04 03:04:00.270871 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 03:04:00.270882 | orchestrator | Wednesday 04 February 2026 03:03:59 +0000 (0:00:00.135) 0:00:08.970 ****
2026-02-04 03:04:00.270893 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:00.270903 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:04:00.270914 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:04:00.270925 | orchestrator |
2026-02-04 03:04:00.270936 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 03:04:00.270954 | orchestrator | Wednesday 04 February 2026 03:04:00 +0000 (0:00:00.513) 0:00:09.484 ****
2026-02-04 03:04:13.918390 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:04:13.918525 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:04:13.918532 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:04:13.918536 | orchestrator |
2026-02-04 03:04:13.918541 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 03:04:13.918547 | orchestrator | Wednesday 04 February 2026 03:04:00 +0000 (0:00:00.352) 0:00:09.836 ****
2026-02-04 03:04:13.918551 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:13.918556 | orchestrator |
2026-02-04 03:04:13.918560 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 03:04:13.918580 | orchestrator | Wednesday 04 February 2026 03:04:00 +0000 (0:00:00.131) 0:00:09.967 ****
2026-02-04 03:04:13.918584 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:13.918588 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:04:13.918592 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:04:13.918596 | orchestrator |
2026-02-04 03:04:13.918600 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 03:04:13.918604 | orchestrator | Wednesday 04 February 2026 03:04:01 +0000 (0:00:00.293) 0:00:10.261 ****
2026-02-04 03:04:13.918608 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:04:13.918611 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:04:13.918615 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:04:13.918619 | orchestrator |
2026-02-04 03:04:13.918623 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 03:04:13.918627 | orchestrator | Wednesday 04 February 2026 03:04:01 +0000 (0:00:00.505) 0:00:10.767 ****
2026-02-04 03:04:13.918630 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:13.918634 | orchestrator |
2026-02-04 03:04:13.918638 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 03:04:13.918642 | orchestrator | Wednesday 04 February 2026 03:04:01 +0000 (0:00:00.148) 0:00:10.915 ****
2026-02-04 03:04:13.918646 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:13.918649 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:04:13.918653 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:04:13.918657 | orchestrator |
2026-02-04 03:04:13.918661 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 03:04:13.918665 | orchestrator | Wednesday 04 February 2026 03:04:01 +0000 (0:00:00.309) 0:00:11.225 ****
2026-02-04 03:04:13.918668 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:04:13.918672 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:04:13.918676 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:04:13.918680 | orchestrator |
2026-02-04 03:04:13.918684 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 03:04:13.918687 | orchestrator | Wednesday 04 February 2026 03:04:02 +0000 (0:00:00.336) 0:00:11.561 ****
2026-02-04 03:04:13.918691 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:13.918695 | orchestrator |
2026-02-04 03:04:13.918699 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 03:04:13.918703 | orchestrator | Wednesday 04 February 2026 03:04:02 +0000 (0:00:00.131) 0:00:11.692 ****
2026-02-04 03:04:13.918706 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:13.918710 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:04:13.918714 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:04:13.918718 | orchestrator |
2026-02-04 03:04:13.918721 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-04 03:04:13.918725 | orchestrator | Wednesday 04 February 2026 03:04:02 +0000 (0:00:00.500) 0:00:12.192 ****
2026-02-04 03:04:13.918729 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:04:13.918733 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:04:13.918737 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:04:13.918740 | orchestrator |
2026-02-04 03:04:13.918744 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-04 03:04:13.918757 | orchestrator | Wednesday 04 February 2026 03:04:03 +0000 (0:00:00.148) 0:00:12.524 ****
2026-02-04 03:04:13.918761 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:13.918765 | orchestrator |
2026-02-04 03:04:13.918769 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-04 03:04:13.918773 | orchestrator | Wednesday 04 February 2026 03:04:03 +0000 (0:00:00.148) 0:00:12.673 ****
2026-02-04 03:04:13.918776 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:04:13.918780 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:04:13.918784 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:04:13.918788 | orchestrator |
2026-02-04 03:04:13.918792 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-04 03:04:13.918796 | orchestrator | Wednesday 04 February 2026 03:04:03 +0000 (0:00:00.315) 0:00:12.989 ****
2026-02-04 03:04:13.918803 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:04:13.918807 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:04:13.918811 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:04:13.918815 | orchestrator |
2026-02-04 03:04:13.918819 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2026-02-04 03:04:13.918822 | orchestrator | Wednesday 04 February 2026 03:04:05 +0000 (0:00:01.581) 0:00:14.571 ****
2026-02-04 03:04:13.918826 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-04 03:04:13.918831 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-04 03:04:13.918835 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2026-02-04 03:04:13.918839 | orchestrator |
2026-02-04 03:04:13.918842 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2026-02-04 03:04:13.918846 | orchestrator | Wednesday 04 February 2026 03:04:07 +0000 (0:00:02.065) 0:00:16.636 ****
2026-02-04 03:04:13.918850 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-04 03:04:13.918855 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2026-02-04 03:04:13.918859 | orchestrator | changed: [testbed-node-2] =>
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-04 03:04:13.918863 | orchestrator | 2026-02-04 03:04:13.918867 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-04 03:04:13.918879 | orchestrator | Wednesday 04 February 2026 03:04:09 +0000 (0:00:01.858) 0:00:18.495 **** 2026-02-04 03:04:13.918883 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-04 03:04:13.918887 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-04 03:04:13.918891 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-04 03:04:13.918895 | orchestrator | 2026-02-04 03:04:13.918899 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-04 03:04:13.918903 | orchestrator | Wednesday 04 February 2026 03:04:10 +0000 (0:00:01.511) 0:00:20.006 **** 2026-02-04 03:04:13.918908 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:04:13.918912 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:04:13.918917 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:04:13.918922 | orchestrator | 2026-02-04 03:04:13.918926 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-02-04 03:04:13.918930 | orchestrator | Wednesday 04 February 2026 03:04:11 +0000 (0:00:00.315) 0:00:20.321 **** 2026-02-04 03:04:13.918935 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:04:13.918939 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:04:13.918944 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:04:13.918948 | orchestrator | 2026-02-04 03:04:13.918953 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-04 03:04:13.918958 | orchestrator | Wednesday 04 
February 2026 03:04:11 +0000 (0:00:00.494) 0:00:20.815 **** 2026-02-04 03:04:13.918962 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:04:13.918967 | orchestrator | 2026-02-04 03:04:13.918971 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-04 03:04:13.918976 | orchestrator | Wednesday 04 February 2026 03:04:12 +0000 (0:00:00.627) 0:00:21.443 **** 2026-02-04 03:04:13.918988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 03:04:13.919004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 03:04:14.790161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 03:04:14.790347 | orchestrator | 2026-02-04 03:04:14.790367 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-04 03:04:14.790381 | orchestrator | Wednesday 04 February 2026 03:04:13 +0000 (0:00:01.684) 0:00:23.127 **** 2026-02-04 03:04:14.790449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 
'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 03:04:14.790477 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:04:14.790501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 03:04:14.790515 | orchestrator | skipping: [testbed-node-1] 
2026-02-04 03:04:14.790539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 03:04:17.074397 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:04:17.074646 | orchestrator | 2026-02-04 03:04:17.074669 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-02-04 03:04:17.074682 | orchestrator | Wednesday 04 February 2026 03:04:14 +0000 (0:00:00.876) 0:00:24.003 **** 2026-02-04 03:04:17.074700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 03:04:17.074717 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:04:17.074843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 03:04:17.074910 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:04:17.074967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 03:04:17.074991 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:04:17.075009 | orchestrator | 2026-02-04 03:04:17.075028 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-04 03:04:17.075058 | orchestrator | Wednesday 04 February 2026 03:04:15 +0000 (0:00:00.858) 0:00:24.862 **** 2026-02-04 03:04:17.075109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 03:04:59.033921 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 03:04:59.034173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 03:04:59.034199 | orchestrator | 2026-02-04 03:04:59.034214 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-04 03:04:59.034227 | orchestrator | Wednesday 04 February 2026 03:04:17 +0000 (0:00:01.431) 0:00:26.293 **** 2026-02-04 03:04:59.034238 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:04:59.034250 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:04:59.034261 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:04:59.034272 | orchestrator | 2026-02-04 03:04:59.034284 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-04 03:04:59.034294 | orchestrator | Wednesday 04 February 2026 03:04:17 +0000 (0:00:00.525) 0:00:26.818 **** 2026-02-04 03:04:59.034306 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:04:59.034317 | orchestrator | 2026-02-04 03:04:59.034328 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-04 03:04:59.034339 | orchestrator | Wednesday 04 February 2026 03:04:18 +0000 (0:00:00.558) 0:00:27.377 **** 2026-02-04 03:04:59.034349 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:04:59.034360 | orchestrator | 2026-02-04 03:04:59.034371 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-04 03:04:59.034382 | orchestrator | Wednesday 04 February 2026 03:04:20 +0000 (0:00:02.255) 0:00:29.633 **** 2026-02-04 03:04:59.034402 | orchestrator | changed: 
[testbed-node-0] 2026-02-04 03:04:59.034416 | orchestrator | 2026-02-04 03:04:59.034463 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-04 03:04:59.034476 | orchestrator | Wednesday 04 February 2026 03:04:22 +0000 (0:00:02.187) 0:00:31.820 **** 2026-02-04 03:04:59.034489 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:04:59.034503 | orchestrator | 2026-02-04 03:04:59.034515 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-04 03:04:59.034529 | orchestrator | Wednesday 04 February 2026 03:04:38 +0000 (0:00:16.101) 0:00:47.922 **** 2026-02-04 03:04:59.034548 | orchestrator | 2026-02-04 03:04:59.034575 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-04 03:04:59.034595 | orchestrator | Wednesday 04 February 2026 03:04:38 +0000 (0:00:00.230) 0:00:48.152 **** 2026-02-04 03:04:59.034612 | orchestrator | 2026-02-04 03:04:59.034629 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-04 03:04:59.034646 | orchestrator | Wednesday 04 February 2026 03:04:39 +0000 (0:00:00.076) 0:00:48.229 **** 2026-02-04 03:04:59.034661 | orchestrator | 2026-02-04 03:04:59.034678 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-04 03:04:59.034695 | orchestrator | Wednesday 04 February 2026 03:04:39 +0000 (0:00:00.076) 0:00:48.305 **** 2026-02-04 03:04:59.034713 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:04:59.034732 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:04:59.034750 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:04:59.034769 | orchestrator | 2026-02-04 03:04:59.034786 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:04:59.034805 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 
skipped=25  rescued=0 ignored=0 2026-02-04 03:04:59.034825 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-04 03:04:59.034845 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-04 03:04:59.034863 | orchestrator | 2026-02-04 03:04:59.034882 | orchestrator | 2026-02-04 03:04:59.034894 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 03:04:59.034905 | orchestrator | Wednesday 04 February 2026 03:04:59 +0000 (0:00:19.925) 0:01:08.230 **** 2026-02-04 03:04:59.034916 | orchestrator | =============================================================================== 2026-02-04 03:04:59.034927 | orchestrator | horizon : Restart horizon container ------------------------------------ 19.93s 2026-02-04 03:04:59.034937 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.10s 2026-02-04 03:04:59.034957 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.26s 2026-02-04 03:04:59.034968 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.19s 2026-02-04 03:04:59.034979 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.07s 2026-02-04 03:04:59.034990 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.86s 2026-02-04 03:04:59.035001 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.68s 2026-02-04 03:04:59.035012 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.58s 2026-02-04 03:04:59.035022 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.51s 2026-02-04 03:04:59.035033 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.43s 
2026-02-04 03:04:59.035044 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.15s 2026-02-04 03:04:59.035055 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.88s 2026-02-04 03:04:59.035066 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.86s 2026-02-04 03:04:59.035099 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2026-02-04 03:04:59.408713 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s 2026-02-04 03:04:59.408842 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s 2026-02-04 03:04:59.408868 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2026-02-04 03:04:59.408888 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2026-02-04 03:04:59.408907 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2026-02-04 03:04:59.408926 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s 2026-02-04 03:05:01.762987 | orchestrator | 2026-02-04 03:05:01 | INFO  | Task 88dd8f83-c28e-487a-bb2f-cbad206c676e (skyline) was prepared for execution. 2026-02-04 03:05:01.763093 | orchestrator | 2026-02-04 03:05:01 | INFO  | It takes a moment until task 88dd8f83-c28e-487a-bb2f-cbad206c676e (skyline) has been started and output is visible here. 
2026-02-04 03:05:31.324059 | orchestrator | 2026-02-04 03:05:31.324243 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 03:05:31.324261 | orchestrator | 2026-02-04 03:05:31.324326 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 03:05:31.324347 | orchestrator | Wednesday 04 February 2026 03:05:06 +0000 (0:00:00.291) 0:00:00.291 **** 2026-02-04 03:05:31.324396 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:05:31.324415 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:05:31.324479 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:05:31.324501 | orchestrator | 2026-02-04 03:05:31.324520 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 03:05:31.324540 | orchestrator | Wednesday 04 February 2026 03:05:06 +0000 (0:00:00.329) 0:00:00.620 **** 2026-02-04 03:05:31.324559 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-02-04 03:05:31.324573 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-02-04 03:05:31.324586 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-02-04 03:05:31.324600 | orchestrator | 2026-02-04 03:05:31.324613 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-02-04 03:05:31.324626 | orchestrator | 2026-02-04 03:05:31.324640 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-04 03:05:31.324653 | orchestrator | Wednesday 04 February 2026 03:05:06 +0000 (0:00:00.451) 0:00:01.072 **** 2026-02-04 03:05:31.324666 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:05:31.324680 | orchestrator | 2026-02-04 03:05:31.324694 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-02-04 03:05:31.324707 | orchestrator | Wednesday 04 February 2026 03:05:07 +0000 (0:00:00.545) 0:00:01.617 **** 2026-02-04 03:05:31.324720 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-02-04 03:05:31.324733 | orchestrator | 2026-02-04 03:05:31.324747 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-02-04 03:05:31.324760 | orchestrator | Wednesday 04 February 2026 03:05:10 +0000 (0:00:03.252) 0:00:04.870 **** 2026-02-04 03:05:31.324774 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-02-04 03:05:31.324788 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-02-04 03:05:31.324802 | orchestrator | 2026-02-04 03:05:31.324815 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-02-04 03:05:31.324828 | orchestrator | Wednesday 04 February 2026 03:05:16 +0000 (0:00:06.069) 0:00:10.940 **** 2026-02-04 03:05:31.324842 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 03:05:31.324855 | orchestrator | 2026-02-04 03:05:31.324866 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-02-04 03:05:31.324909 | orchestrator | Wednesday 04 February 2026 03:05:19 +0000 (0:00:02.969) 0:00:13.909 **** 2026-02-04 03:05:31.324921 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 03:05:31.324933 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-02-04 03:05:31.324944 | orchestrator | 2026-02-04 03:05:31.324955 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-02-04 03:05:31.324966 | orchestrator | Wednesday 04 February 2026 03:05:23 +0000 (0:00:03.813) 0:00:17.723 **** 2026-02-04 03:05:31.324993 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-04 03:05:31.325005 | orchestrator | 2026-02-04 03:05:31.325016 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-02-04 03:05:31.325027 | orchestrator | Wednesday 04 February 2026 03:05:26 +0000 (0:00:03.037) 0:00:20.761 **** 2026-02-04 03:05:31.325038 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-02-04 03:05:31.325049 | orchestrator | 2026-02-04 03:05:31.325060 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-02-04 03:05:31.325071 | orchestrator | Wednesday 04 February 2026 03:05:30 +0000 (0:00:03.520) 0:00:24.282 **** 2026-02-04 03:05:31.325086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 03:05:31.325124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 03:05:31.325137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 03:05:31.325165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 03:05:31.325202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 03:05:31.325235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 03:05:34.979209 | orchestrator | 2026-02-04 03:05:34.979316 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-04 03:05:34.979332 | orchestrator | Wednesday 04 February 2026 03:05:31 +0000 (0:00:01.269) 0:00:25.552 **** 2026-02-04 03:05:34.979344 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:05:34.979356 | orchestrator | 2026-02-04 03:05:34.979367 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-02-04 03:05:34.979379 | orchestrator | Wednesday 04 February 2026 03:05:32 +0000 (0:00:00.725) 0:00:26.277 **** 2026-02-04 03:05:34.979392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 03:05:34.979481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 03:05:34.979498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-04 03:05:34.979529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 03:05:34.979544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 03:05:34.979563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-04 03:05:34.979574 | orchestrator | 2026-02-04 03:05:34.979586 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-02-04 03:05:34.979603 | orchestrator | Wednesday 04 February 2026 03:05:34 +0000 (0:00:02.322) 0:00:28.600 **** 2026-02-04 03:05:34.979615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-04 03:05:34.979627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-04 03:05:34.979638 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:05:34.979659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-04 03:05:36.314323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-04 03:05:36.314433 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:05:36.314518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-04 03:05:36.314529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-04 03:05:36.314537 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:05:36.314545 | orchestrator |
2026-02-04 03:05:36.314554 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] *****
2026-02-04 03:05:36.314563 | orchestrator | Wednesday 04 February 2026 03:05:34 +0000 (0:00:00.612) 0:00:29.213 ****
2026-02-04 03:05:36.314571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-04 03:05:36.314611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-04 03:05:36.314620 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:05:36.314632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-04 03:05:36.314641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-04 03:05:36.314649 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:05:36.314656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-04 03:05:36.314723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-04 03:05:44.535225 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:05:44.535338 | orchestrator |
2026-02-04 03:05:44.535359 | orchestrator | TASK [skyline : Copying over skyline.yaml files for services] ******************
2026-02-04 03:05:44.535380 | orchestrator | Wednesday 04 February 2026 03:05:36 +0000 (0:00:01.332) 0:00:30.545 ****
2026-02-04 03:05:44.535496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-04 03:05:44.535520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-04 03:05:44.535532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-04 03:05:44.535569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-04 03:05:44.535609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-04 03:05:44.535622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-04 03:05:44.535633 | orchestrator |
2026-02-04 03:05:44.535645 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] *******************
2026-02-04 03:05:44.535656 | orchestrator | Wednesday 04 February 2026 03:05:38 +0000 (0:00:02.352) 0:00:32.897 ****
2026-02-04 03:05:44.535668 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-04 03:05:44.535678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-04 03:05:44.535689 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2)
2026-02-04 03:05:44.535700 | orchestrator |
2026-02-04 03:05:44.535711 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ********************
2026-02-04 03:05:44.535731 | orchestrator | Wednesday 04 February 2026 03:05:40 +0000 (0:00:01.497) 0:00:34.395 ****
2026-02-04 03:05:44.535741 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-04 03:05:44.535754 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-04 03:05:44.535766 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2)
2026-02-04 03:05:44.535778 | orchestrator |
2026-02-04 03:05:44.535791 | orchestrator | TASK [skyline : Copying over config.json files for services] *******************
2026-02-04 03:05:44.535803 | orchestrator | Wednesday 04 February 2026 03:05:42 +0000 (0:00:02.008) 0:00:36.404 ****
2026-02-04 03:05:44.535817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-04 03:05:44.535842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-04 03:05:46.653182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-04 03:05:46.653281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-04 03:05:46.653318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-04 03:05:46.653331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-04 03:05:46.653343 | orchestrator |
2026-02-04 03:05:46.653356 | orchestrator | TASK [skyline : Copying over custom logos] *************************************
2026-02-04 03:05:46.653369 | orchestrator | Wednesday 04 February 2026 03:05:44 +0000 (0:00:02.365) 0:00:38.769 ****
2026-02-04 03:05:46.653380 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:05:46.653392 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:05:46.653403 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:05:46.653414 | orchestrator |
2026-02-04 03:05:46.653497 | orchestrator | TASK [skyline : Check skyline container] ***************************************
2026-02-04 03:05:46.653519 | orchestrator | Wednesday 04 February 2026 03:05:44 +0000 (0:00:00.317) 0:00:39.087 ****
2026-02-04 03:05:46.653532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-04 03:05:46.653553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-04 03:05:46.653565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-02-04 03:05:46.653576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-04 03:05:46.653612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-02-04 03:06:24.017544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}}}})
2026-02-04 03:06:24.017738 | orchestrator |
2026-02-04 03:06:24.017773 | orchestrator | TASK [skyline : Creating Skyline database] *************************************
2026-02-04 03:06:24.017796 | orchestrator | Wednesday 04 February 2026 03:05:46 +0000 (0:00:01.800) 0:00:40.888 ****
2026-02-04 03:06:24.017817 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:06:24.017839 | orchestrator |
2026-02-04 03:06:24.017860 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ********
2026-02-04 03:06:24.017879 | orchestrator | Wednesday 04 February 2026 03:05:48 +0000 (0:00:02.104) 0:00:42.992 ****
2026-02-04 03:06:24.017898 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:06:24.017916 | orchestrator |
2026-02-04 03:06:24.017935 | orchestrator | TASK [skyline : Running Skyline bootstrap container] ***************************
2026-02-04 03:06:24.017951 | orchestrator | Wednesday 04 February 2026 03:05:50 +0000 (0:00:02.134) 0:00:45.127 ****
2026-02-04 03:06:24.017968 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:06:24.017985 | orchestrator |
2026-02-04 03:06:24.018002 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-04 03:06:24.018112 | orchestrator | Wednesday 04 February 2026 03:05:58 +0000 (0:00:07.212) 0:00:52.340 ****
2026-02-04 03:06:24.018130 | orchestrator |
2026-02-04 03:06:24.018145 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-04 03:06:24.018161 | orchestrator | Wednesday 04 February 2026 03:05:58 +0000 (0:00:00.070) 0:00:52.411 ****
2026-02-04 03:06:24.018176 | orchestrator |
2026-02-04 03:06:24.018191 | orchestrator | TASK [skyline : Flush handlers] ************************************************
2026-02-04 03:06:24.018206 | orchestrator | Wednesday 04 February 2026 03:05:58 +0000 (0:00:00.071) 0:00:52.482 ****
2026-02-04 03:06:24.018222 | orchestrator |
2026-02-04 03:06:24.018238 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] ****************
2026-02-04 03:06:24.018253 | orchestrator | Wednesday 04 February 2026 03:05:58 +0000 (0:00:00.073) 0:00:52.556 ****
2026-02-04 03:06:24.018269 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:06:24.018284 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:06:24.018299 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:06:24.018314 | orchestrator |
2026-02-04 03:06:24.018329 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ******************
2026-02-04 03:06:24.018345 | orchestrator | Wednesday 04 February 2026 03:06:09 +0000 (0:00:11.145) 0:01:03.701 ****
2026-02-04 03:06:24.018361 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:06:24.018376 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:06:24.018391 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:06:24.018407 | orchestrator |
2026-02-04 03:06:24.018423 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 03:06:24.018439 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-04 03:06:24.018485 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-04 03:06:24.018501 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-04 03:06:24.018533 | orchestrator |
2026-02-04 03:06:24.018548 | orchestrator |
2026-02-04 03:06:24.018563 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 03:06:24.018577 | orchestrator | Wednesday 04 February 2026 03:06:23 +0000 (0:00:14.137) 0:01:17.839 ****
2026-02-04 03:06:24.018592 | orchestrator | ===============================================================================
2026-02-04 03:06:24.018626 | orchestrator | skyline : Restart skyline-console container ---------------------------- 14.14s
2026-02-04 03:06:24.018641 | orchestrator | skyline : Restart skyline-apiserver container -------------------------- 11.15s
2026-02-04 03:06:24.018655 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.21s
2026-02-04 03:06:24.018669 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.07s
2026-02-04 03:06:24.018684 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 3.81s
2026-02-04 03:06:24.018699 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.52s
2026-02-04 03:06:24.018713 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.25s
2026-02-04 03:06:24.018728 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.04s
2026-02-04 03:06:24.018771 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 2.97s
2026-02-04 03:06:24.018789 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.37s
2026-02-04 03:06:24.018805 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.35s
2026-02-04 03:06:24.018821 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.32s
2026-02-04 03:06:24.018835 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.14s
2026-02-04 03:06:24.018850 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.10s
2026-02-04 03:06:24.018864 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.01s
2026-02-04 03:06:24.018879 | orchestrator | skyline : Check skyline container --------------------------------------- 1.80s
2026-02-04 03:06:24.018893 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.50s
2026-02-04 03:06:24.018908 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.33s
2026-02-04 03:06:24.018922 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.27s
2026-02-04 03:06:24.018937 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.73s
2026-02-04 03:06:26.426381 | orchestrator | 2026-02-04 03:06:26 | INFO  | Task e9c98a64-1579-4709-aeea-bfb9129ff3bd (glance) was prepared for execution.
2026-02-04 03:06:26.426532 | orchestrator | 2026-02-04 03:06:26 | INFO  | It takes a moment until task e9c98a64-1579-4709-aeea-bfb9129ff3bd (glance) has been started and output is visible here.
2026-02-04 03:06:58.919155 | orchestrator |
2026-02-04 03:06:58.919275 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 03:06:58.919292 | orchestrator |
2026-02-04 03:06:58.919304 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 03:06:58.919315 | orchestrator | Wednesday 04 February 2026 03:06:30 +0000 (0:00:00.262) 0:00:00.262 ****
2026-02-04 03:06:58.919327 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:06:58.919339 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:06:58.919350 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:06:58.919361 | orchestrator |
2026-02-04 03:06:58.919372 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 03:06:58.919384 | orchestrator | Wednesday 04 February 2026 03:06:30 +0000 (0:00:00.328) 0:00:00.590 ****
2026-02-04 03:06:58.919395 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2026-02-04 03:06:58.919407 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2026-02-04 03:06:58.919418 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2026-02-04 03:06:58.919453 | orchestrator |
2026-02-04 03:06:58.919465 | orchestrator | PLAY [Apply role glance] *******************************************************
2026-02-04 03:06:58.919524 | orchestrator |
2026-02-04 03:06:58.919536 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-04 03:06:58.919547 | orchestrator | Wednesday 04 February 2026 03:06:31 +0000 (0:00:00.483) 0:00:01.073 ****
2026-02-04 03:06:58.919558 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 03:06:58.919570 | orchestrator |
2026-02-04 03:06:58.919581 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2026-02-04 03:06:58.919592 | orchestrator | Wednesday 04 February 2026 03:06:31 +0000 (0:00:00.543) 0:00:01.617 ****
2026-02-04 03:06:58.919603 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2026-02-04 03:06:58.919614 | orchestrator |
2026-02-04 03:06:58.919624 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2026-02-04 03:06:58.919635 | orchestrator | Wednesday 04 February 2026 03:06:35 +0000 (0:00:03.288) 0:00:04.905 ****
2026-02-04 03:06:58.919646 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2026-02-04 03:06:58.919657 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2026-02-04 03:06:58.919668 | orchestrator |
2026-02-04 03:06:58.919680 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2026-02-04 03:06:58.919693 | orchestrator | Wednesday 04 February 2026 03:06:41 +0000 (0:00:06.073) 0:00:10.979 ****
2026-02-04 03:06:58.919705 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-04 03:06:58.919720 | orchestrator |
2026-02-04 03:06:58.919733 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2026-02-04 03:06:58.919746 | orchestrator | Wednesday 04 February 2026 03:06:44 +0000 (0:00:03.025) 0:00:14.005 ****
2026-02-04 03:06:58.919758 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 03:06:58.919771 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2026-02-04 03:06:58.919783 | orchestrator |
2026-02-04 03:06:58.919796 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2026-02-04 03:06:58.919825 | orchestrator | Wednesday 04 February 2026 03:06:48 +0000 (0:00:03.919) 0:00:17.924 ****
2026-02-04 03:06:58.919837 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-04 03:06:58.919850 | orchestrator |
2026-02-04 03:06:58.919863 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2026-02-04 03:06:58.919875 | orchestrator | Wednesday 04 February 2026 03:06:51 +0000 (0:00:03.046) 0:00:20.970 ****
2026-02-04 03:06:58.919888 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2026-02-04 03:06:58.919900 | orchestrator |
2026-02-04 03:06:58.919912 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2026-02-04 03:06:58.919926 | orchestrator | Wednesday 04 February 2026 03:06:54 +0000 (0:00:03.534) 0:00:24.505 ****
2026-02-04 03:06:58.919964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 03:06:58.920001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 03:06:58.920032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 03:06:58.920079 | orchestrator | 2026-02-04 03:06:58.920098 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-02-04 03:06:58.920117 | orchestrator | Wednesday 04 February 2026 03:06:58 +0000 (0:00:03.337) 0:00:27.842 **** 2026-02-04 03:06:58.920135 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:06:58.920153 | orchestrator | 2026-02-04 03:06:58.920181 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-02-04 03:07:13.906766 | orchestrator | Wednesday 04 February 2026 03:06:58 +0000 (0:00:00.745) 0:00:28.588 **** 2026-02-04 03:07:13.906894 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:07:13.906920 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:07:13.906939 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:07:13.906957 | orchestrator | 2026-02-04 03:07:13.906977 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-04 03:07:13.906996 | orchestrator | Wednesday 04 February 2026 03:07:02 +0000 (0:00:03.461) 0:00:32.049 **** 2026-02-04 03:07:13.907014 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 03:07:13.907035 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 03:07:13.907055 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 03:07:13.907073 | orchestrator | 2026-02-04 03:07:13.907094 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-02-04 03:07:13.907106 | orchestrator | Wednesday 04 February 2026 03:07:03 +0000 (0:00:01.530) 0:00:33.580 **** 2026-02-04 03:07:13.907118 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 
03:07:13.907129 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 03:07:13.907140 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 03:07:13.907151 | orchestrator | 2026-02-04 03:07:13.907162 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-04 03:07:13.907173 | orchestrator | Wednesday 04 February 2026 03:07:05 +0000 (0:00:01.384) 0:00:34.964 **** 2026-02-04 03:07:13.907184 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:07:13.907196 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:07:13.907207 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:07:13.907218 | orchestrator | 2026-02-04 03:07:13.907230 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-02-04 03:07:13.907241 | orchestrator | Wednesday 04 February 2026 03:07:05 +0000 (0:00:00.673) 0:00:35.637 **** 2026-02-04 03:07:13.907252 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:07:13.907263 | orchestrator | 2026-02-04 03:07:13.907275 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-02-04 03:07:13.907286 | orchestrator | Wednesday 04 February 2026 03:07:06 +0000 (0:00:00.126) 0:00:35.764 **** 2026-02-04 03:07:13.907297 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:07:13.907310 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:07:13.907323 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:07:13.907337 | orchestrator | 2026-02-04 03:07:13.907350 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-04 03:07:13.907362 | orchestrator | Wednesday 04 February 2026 03:07:06 +0000 (0:00:00.324) 0:00:36.089 **** 2026-02-04 03:07:13.907393 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:07:13.907407 | orchestrator | 2026-02-04 03:07:13.907420 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-04 03:07:13.907433 | orchestrator | Wednesday 04 February 2026 03:07:07 +0000 (0:00:00.763) 0:00:36.853 **** 2026-02-04 03:07:13.907502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 03:07:13.907546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 03:07:13.907569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 03:07:13.907593 | orchestrator | 2026-02-04 03:07:13.907608 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-04 03:07:13.907621 | orchestrator | Wednesday 04 February 2026 03:07:10 +0000 (0:00:03.742) 0:00:40.595 **** 2026-02-04 03:07:13.907646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 03:07:17.386210 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:07:17.386352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 03:07:17.386403 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:07:17.386426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 03:07:17.386444 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:07:17.386462 | orchestrator | 2026-02-04 03:07:17.386598 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-04 03:07:17.386624 | orchestrator | Wednesday 04 February 2026 03:07:13 +0000 (0:00:02.983) 0:00:43.578 **** 2026-02-04 03:07:17.386687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 03:07:17.386727 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:07:17.386751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 03:07:17.386773 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:07:17.386812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 03:07:51.143659 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:07:51.143771 | orchestrator | 2026-02-04 03:07:51.143787 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-04 03:07:51.143799 | orchestrator | Wednesday 04 February 2026 03:07:17 +0000 (0:00:03.478) 0:00:47.057 **** 2026-02-04 03:07:51.143809 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:07:51.143820 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:07:51.143830 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:07:51.143840 | orchestrator | 2026-02-04 03:07:51.143866 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-04 03:07:51.143876 | orchestrator | Wednesday 04 February 2026 03:07:20 +0000 (0:00:03.210) 0:00:50.268 **** 2026-02-04 03:07:51.143890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 03:07:51.143905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 03:07:51.143963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 03:07:51.143976 | orchestrator | 2026-02-04 03:07:51.143986 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-04 03:07:51.143996 | orchestrator | Wednesday 04 February 2026 03:07:24 +0000 (0:00:03.823) 0:00:54.091 **** 2026-02-04 03:07:51.144006 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:07:51.144016 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:07:51.144026 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:07:51.144036 | orchestrator | 2026-02-04 03:07:51.144046 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-04 03:07:51.144055 | orchestrator | Wednesday 04 February 2026 03:07:29 +0000 (0:00:05.426) 0:00:59.518 **** 2026-02-04 03:07:51.144065 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:07:51.144075 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:07:51.144085 | 
orchestrator | skipping: [testbed-node-2] 2026-02-04 03:07:51.144095 | orchestrator | 2026-02-04 03:07:51.144104 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-04 03:07:51.144114 | orchestrator | Wednesday 04 February 2026 03:07:33 +0000 (0:00:03.442) 0:01:02.960 **** 2026-02-04 03:07:51.144124 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:07:51.144133 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:07:51.144143 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:07:51.144153 | orchestrator | 2026-02-04 03:07:51.144163 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-04 03:07:51.144172 | orchestrator | Wednesday 04 February 2026 03:07:36 +0000 (0:00:03.407) 0:01:06.368 **** 2026-02-04 03:07:51.144182 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:07:51.144192 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:07:51.144203 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:07:51.144216 | orchestrator | 2026-02-04 03:07:51.144229 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-04 03:07:51.144240 | orchestrator | Wednesday 04 February 2026 03:07:39 +0000 (0:00:03.210) 0:01:09.579 **** 2026-02-04 03:07:51.144251 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:07:51.144262 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:07:51.144274 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:07:51.144293 | orchestrator | 2026-02-04 03:07:51.144305 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-04 03:07:51.144317 | orchestrator | Wednesday 04 February 2026 03:07:43 +0000 (0:00:03.357) 0:01:12.937 **** 2026-02-04 03:07:51.144328 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:07:51.144340 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:07:51.144351 | 
orchestrator | skipping: [testbed-node-2] 2026-02-04 03:07:51.144363 | orchestrator | 2026-02-04 03:07:51.144375 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-04 03:07:51.144387 | orchestrator | Wednesday 04 February 2026 03:07:43 +0000 (0:00:00.525) 0:01:13.462 **** 2026-02-04 03:07:51.144398 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-04 03:07:51.144410 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:07:51.144422 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-04 03:07:51.144433 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:07:51.144444 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-04 03:07:51.144455 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:07:51.144466 | orchestrator | 2026-02-04 03:07:51.144478 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-04 03:07:51.144527 | orchestrator | Wednesday 04 February 2026 03:07:46 +0000 (0:00:03.194) 0:01:16.656 **** 2026-02-04 03:07:51.144542 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:07:51.144553 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:07:51.144564 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:07:51.144574 | orchestrator | 2026-02-04 03:07:51.144584 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-04 03:07:51.144600 | orchestrator | Wednesday 04 February 2026 03:07:51 +0000 (0:00:04.154) 0:01:20.811 **** 2026-02-04 03:08:58.442756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 03:08:58.442883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 03:08:58.442956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 03:08:58.442973 | orchestrator | 2026-02-04 03:08:58.442987 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-04 03:08:58.443000 | orchestrator | Wednesday 04 February 2026 03:07:54 +0000 (0:00:03.678) 0:01:24.490 **** 2026-02-04 03:08:58.443011 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:08:58.443024 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:08:58.443035 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:08:58.443046 | orchestrator | 2026-02-04 03:08:58.443057 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-04 03:08:58.443068 | orchestrator | Wednesday 04 February 2026 03:07:55 +0000 (0:00:00.518) 0:01:25.008 **** 2026-02-04 03:08:58.443079 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:08:58.443090 | orchestrator | 2026-02-04 03:08:58.443102 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-04 03:08:58.443113 | orchestrator | Wednesday 04 February 2026 03:07:57 +0000 (0:00:02.017) 0:01:27.026 **** 2026-02-04 03:08:58.443132 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:08:58.443143 | orchestrator | 2026-02-04 03:08:58.443154 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-04 03:08:58.443165 | orchestrator | Wednesday 04 February 2026 03:07:59 +0000 (0:00:02.318) 0:01:29.345 **** 2026-02-04 03:08:58.443175 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:08:58.443186 | orchestrator | 2026-02-04 03:08:58.443197 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-04 03:08:58.443208 | orchestrator | Wednesday 04 February 2026 03:08:01 +0000 (0:00:02.018) 0:01:31.363 **** 2026-02-04 03:08:58.443219 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:08:58.443229 | orchestrator | 2026-02-04 03:08:58.443240 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-04 03:08:58.443251 | orchestrator | Wednesday 04 February 2026 03:08:27 +0000 (0:00:26.286) 0:01:57.650 **** 2026-02-04 03:08:58.443265 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:08:58.443277 | orchestrator | 2026-02-04 03:08:58.443290 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-04 03:08:58.443303 | orchestrator | Wednesday 04 February 2026 03:08:29 +0000 (0:00:01.979) 0:01:59.629 **** 2026-02-04 03:08:58.443316 | orchestrator | 2026-02-04 03:08:58.443329 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-04 03:08:58.443342 | orchestrator | Wednesday 04 February 2026 03:08:30 +0000 (0:00:00.068) 0:01:59.698 **** 2026-02-04 03:08:58.443355 | orchestrator | 2026-02-04 03:08:58.443369 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-04 03:08:58.443381 | orchestrator | Wednesday 04 February 2026 03:08:30 +0000 (0:00:00.070) 0:01:59.768 **** 2026-02-04 03:08:58.443394 | orchestrator | 2026-02-04 03:08:58.443407 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-04 03:08:58.443419 | orchestrator | Wednesday 04 February 2026 03:08:30 +0000 (0:00:00.068) 0:01:59.837 **** 2026-02-04 03:08:58.443431 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:08:58.443444 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:08:58.443457 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:08:58.443470 | orchestrator | 2026-02-04 03:08:58.443483 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:08:58.443496 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-04 03:08:58.443510 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 03:08:58.443555 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 03:08:58.443569 | orchestrator | 2026-02-04 03:08:58.443581 | orchestrator | 2026-02-04 03:08:58.443596 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 03:08:58.443609 | orchestrator | Wednesday 04 February 2026 03:08:58 +0000 (0:00:28.263) 0:02:28.101 **** 2026-02-04 03:08:58.443621 | orchestrator | =============================================================================== 2026-02-04 03:08:58.443632 | orchestrator | glance : Restart glance-api container ---------------------------------- 28.26s 2026-02-04 03:08:58.443643 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.29s 2026-02-04 03:08:58.443654 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.07s 2026-02-04 03:08:58.443672 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.43s 2026-02-04 03:08:58.791824 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.15s 2026-02-04 03:08:58.791936 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.92s 2026-02-04 03:08:58.791961 | orchestrator | glance : Copying over config.json files for services -------------------- 3.82s 2026-02-04 03:08:58.792022 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.74s 2026-02-04 03:08:58.792035 | orchestrator | glance : Check glance containers ---------------------------------------- 3.68s 2026-02-04 03:08:58.792046 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.53s 2026-02-04 03:08:58.792057 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.48s 2026-02-04 03:08:58.792068 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.46s 2026-02-04 03:08:58.792079 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.44s 2026-02-04 03:08:58.792091 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.41s 2026-02-04 03:08:58.792102 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.36s 2026-02-04 03:08:58.792113 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.34s 2026-02-04 03:08:58.792123 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.29s 2026-02-04 03:08:58.792135 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.21s 2026-02-04 03:08:58.792146 | orchestrator | 
glance : Creating TLS backend PEM File ---------------------------------- 3.21s 2026-02-04 03:08:58.792157 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.19s 2026-02-04 03:09:01.152721 | orchestrator | 2026-02-04 03:09:01 | INFO  | Task 777fd075-242e-42d3-9398-dde5c632d0c5 (cinder) was prepared for execution. 2026-02-04 03:09:01.152802 | orchestrator | 2026-02-04 03:09:01 | INFO  | It takes a moment until task 777fd075-242e-42d3-9398-dde5c632d0c5 (cinder) has been started and output is visible here. 2026-02-04 03:09:34.494209 | orchestrator | 2026-02-04 03:09:34.494329 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 03:09:34.494347 | orchestrator | 2026-02-04 03:09:34.494360 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 03:09:34.494371 | orchestrator | Wednesday 04 February 2026 03:09:05 +0000 (0:00:00.281) 0:00:00.281 **** 2026-02-04 03:09:34.494383 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:09:34.494396 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:09:34.494407 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:09:34.494418 | orchestrator | 2026-02-04 03:09:34.494430 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 03:09:34.494441 | orchestrator | Wednesday 04 February 2026 03:09:05 +0000 (0:00:00.305) 0:00:00.586 **** 2026-02-04 03:09:34.494453 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-04 03:09:34.494464 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-04 03:09:34.494476 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-04 03:09:34.494487 | orchestrator | 2026-02-04 03:09:34.494498 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-04 03:09:34.494509 | orchestrator | 2026-02-04 
03:09:34.494520 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 03:09:34.494531 | orchestrator | Wednesday 04 February 2026 03:09:06 +0000 (0:00:00.441) 0:00:01.027 **** 2026-02-04 03:09:34.494592 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:09:34.494605 | orchestrator | 2026-02-04 03:09:34.494616 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-04 03:09:34.494628 | orchestrator | Wednesday 04 February 2026 03:09:06 +0000 (0:00:00.548) 0:00:01.576 **** 2026-02-04 03:09:34.494641 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-04 03:09:34.494652 | orchestrator | 2026-02-04 03:09:34.494663 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-04 03:09:34.494674 | orchestrator | Wednesday 04 February 2026 03:09:09 +0000 (0:00:03.051) 0:00:04.627 **** 2026-02-04 03:09:34.494685 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-04 03:09:34.494724 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-04 03:09:34.494736 | orchestrator | 2026-02-04 03:09:34.494749 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-04 03:09:34.494763 | orchestrator | Wednesday 04 February 2026 03:09:15 +0000 (0:00:06.087) 0:00:10.715 **** 2026-02-04 03:09:34.494776 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 03:09:34.494789 | orchestrator | 2026-02-04 03:09:34.494802 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-04 03:09:34.494814 | orchestrator | Wednesday 04 February 2026 03:09:18 +0000 (0:00:03.040) 
0:00:13.755 **** 2026-02-04 03:09:34.494828 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 03:09:34.494841 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-04 03:09:34.494855 | orchestrator | 2026-02-04 03:09:34.494868 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-04 03:09:34.494881 | orchestrator | Wednesday 04 February 2026 03:09:22 +0000 (0:00:03.876) 0:00:17.632 **** 2026-02-04 03:09:34.494894 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 03:09:34.494906 | orchestrator | 2026-02-04 03:09:34.494919 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-04 03:09:34.494932 | orchestrator | Wednesday 04 February 2026 03:09:25 +0000 (0:00:03.036) 0:00:20.668 **** 2026-02-04 03:09:34.494944 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-04 03:09:34.494957 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-04 03:09:34.494970 | orchestrator | 2026-02-04 03:09:34.494996 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-04 03:09:34.495010 | orchestrator | Wednesday 04 February 2026 03:09:32 +0000 (0:00:06.725) 0:00:27.394 **** 2026-02-04 03:09:34.495028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 03:09:34.495067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 03:09:34.495083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 03:09:34.495107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:34.495121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:34.495138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:34.495150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:34.495169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:40.186593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:40.186672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:40.186694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:40.186701 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:40.186707 | orchestrator | 2026-02-04 03:09:40.186714 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 03:09:40.186720 | orchestrator | Wednesday 04 February 2026 03:09:34 +0000 (0:00:01.978) 0:00:29.373 **** 2026-02-04 03:09:40.186726 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:09:40.186733 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:09:40.186738 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:09:40.186743 | orchestrator | 2026-02-04 03:09:40.186748 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 03:09:40.186754 | orchestrator | Wednesday 04 February 2026 03:09:35 +0000 (0:00:00.490) 0:00:29.864 **** 2026-02-04 03:09:40.186759 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:09:40.186765 | orchestrator | 2026-02-04 03:09:40.186770 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-04 03:09:40.186776 | orchestrator | Wednesday 04 February 2026 03:09:35 +0000 (0:00:00.565) 0:00:30.429 **** 2026-02-04 03:09:40.186797 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-04 03:09:40.186803 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-04 03:09:40.186809 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-04 03:09:40.186814 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-04 03:09:40.186819 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-04 03:09:40.186824 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-04 03:09:40.186829 | orchestrator | 2026-02-04 03:09:40.186834 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-04 03:09:40.186839 | orchestrator | Wednesday 04 February 2026 03:09:37 +0000 (0:00:01.665) 0:00:32.094 **** 2026-02-04 03:09:40.186856 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 03:09:40.186864 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 03:09:40.186874 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 03:09:40.186880 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 03:09:40.186893 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 03:09:50.599304 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-04 03:09:50.599442 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 03:09:50.599489 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 03:09:50.599511 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 03:09:50.600345 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 03:09:50.600408 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 
03:09:50.600430 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-04 03:09:50.600450 | orchestrator | 2026-02-04 03:09:50.600464 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-04 03:09:50.600477 | orchestrator | Wednesday 04 February 2026 03:09:40 +0000 (0:00:03.217) 0:00:35.312 **** 2026-02-04 03:09:50.600488 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 03:09:50.600500 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 03:09:50.600511 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-04 03:09:50.600522 | orchestrator | 2026-02-04 03:09:50.600533 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-04 03:09:50.600565 | orchestrator | Wednesday 04 February 2026 03:09:41 +0000 (0:00:01.438) 0:00:36.750 **** 2026-02-04 03:09:50.600578 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-04 03:09:50.600599 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-04 03:09:50.600610 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-04 03:09:50.600621 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 03:09:50.600632 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 03:09:50.600643 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-04 03:09:50.600654 | orchestrator | 2026-02-04 03:09:50.600675 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-04 03:09:50.600686 | orchestrator | Wednesday 04 February 2026 03:09:44 +0000 (0:00:02.543) 0:00:39.294 **** 2026-02-04 03:09:50.600698 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-04 03:09:50.600710 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-04 03:09:50.600721 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-04 03:09:50.600732 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-04 03:09:50.600743 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-04 03:09:50.600753 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-04 03:09:50.600764 | orchestrator | 2026-02-04 03:09:50.600775 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-04 03:09:50.600786 | orchestrator | Wednesday 04 February 2026 03:09:45 +0000 (0:00:00.982) 0:00:40.276 **** 2026-02-04 03:09:50.600797 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:09:50.600808 | orchestrator | 2026-02-04 03:09:50.600819 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-04 03:09:50.600829 | orchestrator | Wednesday 04 February 2026 03:09:45 +0000 (0:00:00.141) 0:00:40.417 **** 2026-02-04 03:09:50.600840 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:09:50.600851 | orchestrator | 
skipping: [testbed-node-1] 2026-02-04 03:09:50.600862 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:09:50.600872 | orchestrator | 2026-02-04 03:09:50.600883 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 03:09:50.600894 | orchestrator | Wednesday 04 February 2026 03:09:46 +0000 (0:00:00.514) 0:00:40.932 **** 2026-02-04 03:09:50.600905 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:09:50.600916 | orchestrator | 2026-02-04 03:09:50.600927 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-04 03:09:50.600938 | orchestrator | Wednesday 04 February 2026 03:09:46 +0000 (0:00:00.600) 0:00:41.532 **** 2026-02-04 03:09:50.600959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 03:09:51.485793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 03:09:51.485919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 03:09:51.485961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:51.485976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:51.485988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:51.486071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:51.486088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:51.486114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 
03:09:51.486128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:51.486140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:51.486153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:51.486165 | orchestrator | 2026-02-04 03:09:51.486179 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-04 03:09:51.486193 | orchestrator | Wednesday 04 February 2026 03:09:50 +0000 (0:00:03.952) 0:00:45.485 **** 2026-02-04 03:09:51.486216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 03:09:51.598667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:09:51.598759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 03:09:51.598773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 03:09:51.598790 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:09:51.598806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 03:09:51.598818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:09:51.598848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-04 03:09:51.598885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 03:09:51.598896 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:09:51.598906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 03:09:51.598917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:09:51.598927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 03:09:51.598937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 03:09:51.598953 | orchestrator | skipping: 
[testbed-node-2]
2026-02-04 03:09:51.598964 | orchestrator |
2026-02-04 03:09:51.598975 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-02-04 03:09:51.598993 | orchestrator | Wednesday 04 February 2026 03:09:51 +0000 (0:00:00.901) 0:00:46.386 ****
2026-02-04 03:09:52.225965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-04 03:09:52.226124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 03:09:52.226143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 03:09:52.226156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 03:09:52.226169 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:09:52.226183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 03:09:52.226247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:09:52.226267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 03:09:52.226280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 03:09:52.226292 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:09:52.226304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 03:09:52.226315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 03:09:52.226342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-04 03:09:56.536415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-04 03:09:56.536666 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:09:56.536694 | orchestrator |
2026-02-04 03:09:56.536707 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-02-04 03:09:56.536720 | orchestrator | Wednesday 04 February 2026 03:09:52 +0000 (0:00:00.964) 0:00:47.351 **** 2026-02-04 03:09:56.536734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 03:09:56.536748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 
03:09:56.536784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 03:09:56.536815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:56.536837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:56.536852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:56.536865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:56.536881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:56.536902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:09:56.536923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 03:10:09.295229 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-04 03:10:09.295355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-04 03:10:09.295374 | orchestrator |
2026-02-04 03:10:09.295388 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-02-04 03:10:09.295401 | orchestrator | Wednesday 04 February 2026 03:09:56 +0000 (0:00:04.083) 0:00:51.435 ****
2026-02-04 03:10:09.295412 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-04 03:10:09.295424 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-04 03:10:09.295435 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-04 03:10:09.295446 | orchestrator |
2026-02-04 03:10:09.295457 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-02-04 03:10:09.295468 | orchestrator | Wednesday 04 February 2026 03:09:58 +0000 (0:00:01.914) 0:00:53.349 ****
2026-02-04 03:10:09.295503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-04 03:10:09.295517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 03:10:09.295648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 03:10:09.295675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:10:09.295689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:10:09.295712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:10:09.295725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:10:09.295739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:10:09.295767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:10:11.470381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-04 03:10:11.470485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-04 03:10:11.470528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-04 03:10:11.470542 | orchestrator |
2026-02-04 03:10:11.470627 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-02-04 03:10:11.470642 | orchestrator | Wednesday 04 February 2026 03:10:09 +0000 (0:00:10.824) 0:01:04.173 ****
2026-02-04 03:10:11.470653 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:10:11.470666 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:10:11.470677 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:10:11.470688 | orchestrator |
2026-02-04 03:10:11.470699 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-02-04 03:10:11.470710 | orchestrator | Wednesday 04 February 2026 03:10:10 +0000 (0:00:01.468) 0:01:05.642 ****
2026-02-04 03:10:11.470723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-04 03:10:11.470751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-02-04 03:10:11.470782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 03:10:11.470803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 03:10:11.470815 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:10:11.470826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 03:10:11.470838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:10:11.470850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 03:10:11.470876 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 03:10:14.929281 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:10:14.929391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-04 03:10:14.929436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:10:14.929449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 03:10:14.929460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 03:10:14.929471 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:10:14.929482 | orchestrator | 2026-02-04 
03:10:14.929493 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-04 03:10:14.929504 | orchestrator | Wednesday 04 February 2026 03:10:11 +0000 (0:00:00.738) 0:01:06.380 **** 2026-02-04 03:10:14.929514 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:10:14.929524 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:10:14.929534 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:10:14.929543 | orchestrator | 2026-02-04 03:10:14.929622 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-04 03:10:14.929635 | orchestrator | Wednesday 04 February 2026 03:10:12 +0000 (0:00:00.601) 0:01:06.982 **** 2026-02-04 03:10:14.929679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 03:10:14.929700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 03:10:14.929712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-04 03:10:14.929723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:10:14.929734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:10:14.929749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:10:14.929826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:11:45.936791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:11:45.936927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-04 03:11:45.936948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 03:11:45.936961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-04 03:11:45.936990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-02-04 03:11:45.937039 | orchestrator | 2026-02-04 03:11:45.937056 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-04 03:11:45.937069 | orchestrator | Wednesday 04 February 2026 03:10:15 +0000 (0:00:02.831) 0:01:09.813 **** 2026-02-04 03:11:45.937080 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:11:45.937093 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:11:45.937103 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:11:45.937114 | orchestrator | 2026-02-04 03:11:45.937125 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-04 03:11:45.937136 | orchestrator | Wednesday 04 February 2026 03:10:15 +0000 (0:00:00.309) 0:01:10.123 **** 2026-02-04 03:11:45.937147 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:11:45.937158 | orchestrator | 2026-02-04 03:11:45.937187 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-04 03:11:45.937198 | orchestrator | Wednesday 04 February 2026 03:10:17 +0000 (0:00:02.067) 0:01:12.191 **** 2026-02-04 03:11:45.937209 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:11:45.937220 | orchestrator | 2026-02-04 03:11:45.937232 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-04 03:11:45.937245 | orchestrator | Wednesday 04 February 2026 03:10:19 +0000 (0:00:02.202) 0:01:14.393 **** 2026-02-04 03:11:45.937258 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:11:45.937271 | orchestrator | 2026-02-04 03:11:45.937284 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-04 03:11:45.937297 | orchestrator | Wednesday 04 February 2026 03:10:38 +0000 (0:00:18.827) 0:01:33.220 **** 2026-02-04 03:11:45.937310 | orchestrator | 2026-02-04 03:11:45.937322 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-02-04 03:11:45.937336 | orchestrator | Wednesday 04 February 2026 03:10:38 +0000 (0:00:00.251) 0:01:33.471 **** 2026-02-04 03:11:45.937348 | orchestrator | 2026-02-04 03:11:45.937360 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-04 03:11:45.937373 | orchestrator | Wednesday 04 February 2026 03:10:38 +0000 (0:00:00.080) 0:01:33.552 **** 2026-02-04 03:11:45.937385 | orchestrator | 2026-02-04 03:11:45.937398 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-04 03:11:45.937411 | orchestrator | Wednesday 04 February 2026 03:10:38 +0000 (0:00:00.069) 0:01:33.622 **** 2026-02-04 03:11:45.937423 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:11:45.937436 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:11:45.937449 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:11:45.937461 | orchestrator | 2026-02-04 03:11:45.937473 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-04 03:11:45.937485 | orchestrator | Wednesday 04 February 2026 03:11:06 +0000 (0:00:27.894) 0:02:01.516 **** 2026-02-04 03:11:45.937499 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:11:45.937512 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:11:45.937524 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:11:45.937537 | orchestrator | 2026-02-04 03:11:45.937550 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-04 03:11:45.937563 | orchestrator | Wednesday 04 February 2026 03:11:16 +0000 (0:00:10.212) 0:02:11.728 **** 2026-02-04 03:11:45.937575 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:11:45.937589 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:11:45.937633 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:11:45.937645 | orchestrator | 2026-02-04 
03:11:45.937666 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-04 03:11:45.937677 | orchestrator | Wednesday 04 February 2026 03:11:39 +0000 (0:00:22.909) 0:02:34.638 **** 2026-02-04 03:11:45.937687 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:11:45.937746 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:11:45.937760 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:11:45.937770 | orchestrator | 2026-02-04 03:11:45.937781 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-04 03:11:45.937793 | orchestrator | Wednesday 04 February 2026 03:11:45 +0000 (0:00:05.785) 0:02:40.424 **** 2026-02-04 03:11:45.937804 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:11:45.937814 | orchestrator | 2026-02-04 03:11:45.937825 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:11:45.937837 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-04 03:11:45.937849 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 03:11:45.937859 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 03:11:45.937870 | orchestrator | 2026-02-04 03:11:45.937881 | orchestrator | 2026-02-04 03:11:45.937892 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 03:11:45.937902 | orchestrator | Wednesday 04 February 2026 03:11:45 +0000 (0:00:00.277) 0:02:40.701 **** 2026-02-04 03:11:45.937913 | orchestrator | =============================================================================== 2026-02-04 03:11:45.937931 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.89s 2026-02-04 03:11:45.937943 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 22.91s 2026-02-04 03:11:45.937953 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.83s 2026-02-04 03:11:45.937964 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.82s 2026-02-04 03:11:45.937975 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.21s 2026-02-04 03:11:45.937985 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.73s 2026-02-04 03:11:45.937996 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.09s 2026-02-04 03:11:45.938006 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.79s 2026-02-04 03:11:45.938067 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.08s 2026-02-04 03:11:45.938080 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.95s 2026-02-04 03:11:45.938090 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.88s 2026-02-04 03:11:45.938101 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.22s 2026-02-04 03:11:45.938112 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.05s 2026-02-04 03:11:45.938122 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.04s 2026-02-04 03:11:45.938142 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.04s 2026-02-04 03:11:46.290905 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.83s 2026-02-04 03:11:46.291000 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.54s 2026-02-04 03:11:46.291014 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.20s 2026-02-04 03:11:46.291026 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.07s 2026-02-04 03:11:46.291037 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 1.98s 2026-02-04 03:11:48.641958 | orchestrator | 2026-02-04 03:11:48 | INFO  | Task 29220090-e876-4004-8a8c-b16c47c871c2 (barbican) was prepared for execution. 2026-02-04 03:11:48.642143 | orchestrator | 2026-02-04 03:11:48 | INFO  | It takes a moment until task 29220090-e876-4004-8a8c-b16c47c871c2 (barbican) has been started and output is visible here. 2026-02-04 03:12:30.204228 | orchestrator | 2026-02-04 03:12:30.204384 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 03:12:30.204402 | orchestrator | 2026-02-04 03:12:30.204414 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 03:12:30.204426 | orchestrator | Wednesday 04 February 2026 03:11:52 +0000 (0:00:00.260) 0:00:00.260 **** 2026-02-04 03:12:30.204439 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:12:30.204452 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:12:30.204463 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:12:30.204474 | orchestrator | 2026-02-04 03:12:30.204485 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 03:12:30.204496 | orchestrator | Wednesday 04 February 2026 03:11:53 +0000 (0:00:00.347) 0:00:00.608 **** 2026-02-04 03:12:30.204508 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-04 03:12:30.204520 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-04 03:12:30.204531 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-04 03:12:30.204542 | orchestrator | 2026-02-04 03:12:30.204553 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-02-04 03:12:30.204564 | orchestrator | 2026-02-04 03:12:30.204575 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-04 03:12:30.204586 | orchestrator | Wednesday 04 February 2026 03:11:53 +0000 (0:00:00.444) 0:00:01.052 **** 2026-02-04 03:12:30.204598 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:12:30.204609 | orchestrator | 2026-02-04 03:12:30.204646 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-04 03:12:30.204658 | orchestrator | Wednesday 04 February 2026 03:11:54 +0000 (0:00:00.539) 0:00:01.591 **** 2026-02-04 03:12:30.204670 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-04 03:12:30.204681 | orchestrator | 2026-02-04 03:12:30.204692 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-04 03:12:30.204702 | orchestrator | Wednesday 04 February 2026 03:11:57 +0000 (0:00:03.029) 0:00:04.621 **** 2026-02-04 03:12:30.204713 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-04 03:12:30.204727 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-04 03:12:30.204740 | orchestrator | 2026-02-04 03:12:30.204753 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-04 03:12:30.204766 | orchestrator | Wednesday 04 February 2026 03:12:03 +0000 (0:00:06.197) 0:00:10.819 **** 2026-02-04 03:12:30.204778 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 03:12:30.204791 | orchestrator | 2026-02-04 03:12:30.204804 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-04 
03:12:30.204817 | orchestrator | Wednesday 04 February 2026 03:12:06 +0000 (0:00:03.060) 0:00:13.879 **** 2026-02-04 03:12:30.204830 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 03:12:30.204843 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-02-04 03:12:30.204856 | orchestrator | 2026-02-04 03:12:30.204890 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-02-04 03:12:30.204903 | orchestrator | Wednesday 04 February 2026 03:12:10 +0000 (0:00:03.726) 0:00:17.606 **** 2026-02-04 03:12:30.204917 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 03:12:30.204931 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-02-04 03:12:30.204945 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-02-04 03:12:30.204986 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-02-04 03:12:30.204999 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-02-04 03:12:30.205012 | orchestrator | 2026-02-04 03:12:30.205025 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-02-04 03:12:30.205037 | orchestrator | Wednesday 04 February 2026 03:12:25 +0000 (0:00:14.871) 0:00:32.478 **** 2026-02-04 03:12:30.205048 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-02-04 03:12:30.205059 | orchestrator | 2026-02-04 03:12:30.205070 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-02-04 03:12:30.205081 | orchestrator | Wednesday 04 February 2026 03:12:28 +0000 (0:00:03.569) 0:00:36.047 **** 2026-02-04 03:12:30.205096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:30.205134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:30.205147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:30.205166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:30.205190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:30.205202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:30.205223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:35.851606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:35.851755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:35.851769 | orchestrator | 2026-02-04 03:12:35.851780 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-04 03:12:35.851789 | orchestrator | Wednesday 04 February 2026 03:12:30 +0000 (0:00:01.565) 0:00:37.613 **** 2026-02-04 03:12:35.851797 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-04 03:12:35.851805 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-04 03:12:35.851812 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-04 03:12:35.851840 | orchestrator | 2026-02-04 03:12:35.851848 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-04 03:12:35.851855 | orchestrator | Wednesday 04 February 2026 03:12:31 +0000 (0:00:01.099) 0:00:38.712 **** 2026-02-04 03:12:35.851863 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:12:35.851871 | orchestrator | 2026-02-04 03:12:35.851878 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-04 03:12:35.851885 | orchestrator | Wednesday 04 February 2026 03:12:31 +0000 (0:00:00.343) 0:00:39.056 **** 2026-02-04 03:12:35.851905 | orchestrator | 
skipping: [testbed-node-0] 2026-02-04 03:12:35.851912 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:12:35.851919 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:12:35.851927 | orchestrator | 2026-02-04 03:12:35.851934 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-04 03:12:35.851941 | orchestrator | Wednesday 04 February 2026 03:12:31 +0000 (0:00:00.296) 0:00:39.352 **** 2026-02-04 03:12:35.851949 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:12:35.851957 | orchestrator | 2026-02-04 03:12:35.851964 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-04 03:12:35.851971 | orchestrator | Wednesday 04 February 2026 03:12:32 +0000 (0:00:00.556) 0:00:39.909 **** 2026-02-04 03:12:35.851979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:35.852004 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:35.852012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:35.852027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:35.852040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:35.852049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:35.852056 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:35.852071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:37.307542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:37.307756 | orchestrator | 2026-02-04 03:12:37.307790 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-02-04 03:12:37.307812 | orchestrator | Wednesday 04 February 2026 03:12:35 +0000 (0:00:03.350) 0:00:43.259 **** 2026-02-04 03:12:37.307843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 03:12:37.307859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:12:37.307880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:12:37.307892 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:12:37.307905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 03:12:37.307936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:12:37.307958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:12:37.307976 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:12:37.308002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 03:12:37.308022 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:12:37.308040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:12:37.308058 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:12:37.308075 | orchestrator | 2026-02-04 03:12:37.308094 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-04 03:12:37.308114 | orchestrator | Wednesday 04 February 2026 03:12:36 +0000 (0:00:00.657) 0:00:43.917 **** 2026-02-04 03:12:37.308207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 03:12:40.555035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:12:40.555178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 
03:12:40.555215 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:12:40.555285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 03:12:40.555301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:12:40.555313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:12:40.555347 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:12:40.555380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 03:12:40.555393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:12:40.555411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:12:40.555423 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:12:40.555435 | orchestrator | 2026-02-04 03:12:40.555447 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-04 03:12:40.555460 | orchestrator | Wednesday 04 February 2026 03:12:37 +0000 (0:00:00.813) 0:00:44.730 **** 2026-02-04 03:12:40.555472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:40.555485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:40.555514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:49.835811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:49.835932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:49.835950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:49.835963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:49.835999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:49.836012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:49.836024 | orchestrator | 2026-02-04 03:12:49.836038 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-04 03:12:49.836051 | orchestrator | Wednesday 04 February 2026 03:12:40 +0000 (0:00:03.230) 0:00:47.960 **** 2026-02-04 03:12:49.836063 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:12:49.836075 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:12:49.836086 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:12:49.836097 | orchestrator | 2026-02-04 03:12:49.836124 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-04 03:12:49.836136 | orchestrator | Wednesday 04 February 2026 03:12:42 +0000 (0:00:01.470) 0:00:49.430 **** 2026-02-04 03:12:49.836147 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 03:12:49.836158 | orchestrator | 2026-02-04 03:12:49.836169 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-04 03:12:49.836180 | orchestrator | Wednesday 04 February 2026 03:12:42 +0000 (0:00:00.966) 0:00:50.396 **** 2026-02-04 03:12:49.836191 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:12:49.836202 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:12:49.836213 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:12:49.836223 | orchestrator | 2026-02-04 03:12:49.836235 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-04 03:12:49.836246 | orchestrator | Wednesday 04 February 2026 03:12:43 +0000 (0:00:00.609) 0:00:51.006 **** 2026-02-04 03:12:49.836362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:49.836385 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:49.836409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:49.836431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:50.710482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:50.710562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:50.710571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:50.710596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:50.710603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:50.710610 | orchestrator | 2026-02-04 03:12:50.710618 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-04 03:12:50.710625 | orchestrator | Wednesday 04 February 2026 03:12:49 +0000 (0:00:06.248) 0:00:57.255 **** 2026-02-04 03:12:50.710947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 03:12:50.710967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:12:50.710975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:12:50.710992 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:12:50.711000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 03:12:50.711006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:12:50.711013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:12:50.711019 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:12:50.711035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-04 03:12:52.924131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:12:52.924264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:12:52.924283 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:12:52.924298 | orchestrator | 2026-02-04 03:12:52.924310 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-04 03:12:52.924323 | orchestrator | Wednesday 04 February 2026 03:12:50 +0000 (0:00:00.869) 0:00:58.124 **** 2026-02-04 03:12:52.924335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:52.924348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:52.924396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-04 03:12:52.924410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:52.924430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:52.924442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:52.924453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:52.924465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:52.924477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:12:52.924488 | orchestrator | 2026-02-04 03:12:52.924504 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-04 03:12:52.924531 | orchestrator | Wednesday 04 February 2026 03:12:52 +0000 (0:00:02.212) 0:01:00.337 **** 2026-02-04 03:13:40.527122 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:13:40.527227 | orchestrator | skipping: [testbed-node-1] 2026-02-04 
03:13:40.527235 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:13:40.527240 | orchestrator | 2026-02-04 03:13:40.527245 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-04 03:13:40.527251 | orchestrator | Wednesday 04 February 2026 03:12:53 +0000 (0:00:00.300) 0:01:00.637 **** 2026-02-04 03:13:40.527255 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:13:40.527259 | orchestrator | 2026-02-04 03:13:40.527263 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-04 03:13:40.527267 | orchestrator | Wednesday 04 February 2026 03:12:55 +0000 (0:00:01.979) 0:01:02.616 **** 2026-02-04 03:13:40.527271 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:13:40.527275 | orchestrator | 2026-02-04 03:13:40.527279 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-04 03:13:40.527283 | orchestrator | Wednesday 04 February 2026 03:12:57 +0000 (0:00:02.255) 0:01:04.872 **** 2026-02-04 03:13:40.527287 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:13:40.527290 | orchestrator | 2026-02-04 03:13:40.527294 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-04 03:13:40.527298 | orchestrator | Wednesday 04 February 2026 03:13:08 +0000 (0:00:11.009) 0:01:15.881 **** 2026-02-04 03:13:40.527302 | orchestrator | 2026-02-04 03:13:40.527306 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-04 03:13:40.527309 | orchestrator | Wednesday 04 February 2026 03:13:08 +0000 (0:00:00.249) 0:01:16.131 **** 2026-02-04 03:13:40.527313 | orchestrator | 2026-02-04 03:13:40.527317 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-04 03:13:40.527321 | orchestrator | Wednesday 04 February 2026 03:13:08 +0000 (0:00:00.069) 0:01:16.201 **** 2026-02-04 
03:13:40.527325 | orchestrator | 2026-02-04 03:13:40.527328 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-04 03:13:40.527332 | orchestrator | Wednesday 04 February 2026 03:13:08 +0000 (0:00:00.073) 0:01:16.274 **** 2026-02-04 03:13:40.527337 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:13:40.527343 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:13:40.527350 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:13:40.527356 | orchestrator | 2026-02-04 03:13:40.527362 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-04 03:13:40.527368 | orchestrator | Wednesday 04 February 2026 03:13:19 +0000 (0:00:10.890) 0:01:27.164 **** 2026-02-04 03:13:40.527374 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:13:40.527380 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:13:40.527387 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:13:40.527393 | orchestrator | 2026-02-04 03:13:40.527398 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-04 03:13:40.527403 | orchestrator | Wednesday 04 February 2026 03:13:30 +0000 (0:00:10.269) 0:01:37.433 **** 2026-02-04 03:13:40.527409 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:13:40.527415 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:13:40.527421 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:13:40.527427 | orchestrator | 2026-02-04 03:13:40.527432 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:13:40.527439 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 03:13:40.527447 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 03:13:40.527453 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 03:13:40.527460 | orchestrator | 2026-02-04 03:13:40.527487 | orchestrator | 2026-02-04 03:13:40.527494 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 03:13:40.527500 | orchestrator | Wednesday 04 February 2026 03:13:40 +0000 (0:00:10.171) 0:01:47.605 **** 2026-02-04 03:13:40.527506 | orchestrator | =============================================================================== 2026-02-04 03:13:40.527512 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.87s 2026-02-04 03:13:40.527518 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.01s 2026-02-04 03:13:40.527524 | orchestrator | barbican : Restart barbican-api container ------------------------------ 10.89s 2026-02-04 03:13:40.527531 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.27s 2026-02-04 03:13:40.527537 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.17s 2026-02-04 03:13:40.527542 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.25s 2026-02-04 03:13:40.527548 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.20s 2026-02-04 03:13:40.527554 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.73s 2026-02-04 03:13:40.527560 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.57s 2026-02-04 03:13:40.527566 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.35s 2026-02-04 03:13:40.527572 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.23s 2026-02-04 03:13:40.527579 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.06s 
2026-02-04 03:13:40.527585 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.03s 2026-02-04 03:13:40.527591 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.26s 2026-02-04 03:13:40.527610 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.21s 2026-02-04 03:13:40.527633 | orchestrator | barbican : Creating barbican database ----------------------------------- 1.98s 2026-02-04 03:13:40.527640 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.57s 2026-02-04 03:13:40.527646 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.47s 2026-02-04 03:13:40.527653 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.10s 2026-02-04 03:13:40.527702 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.97s 2026-02-04 03:13:42.906377 | orchestrator | 2026-02-04 03:13:42 | INFO  | Task cbdfddf7-f442-4e47-8f90-1de17c151e1e (designate) was prepared for execution. 2026-02-04 03:13:42.906471 | orchestrator | 2026-02-04 03:13:42 | INFO  | It takes a moment until task cbdfddf7-f442-4e47-8f90-1de17c151e1e (designate) has been started and output is visible here. 
2026-02-04 03:14:13.674268 | orchestrator | 2026-02-04 03:14:13.674390 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 03:14:13.674408 | orchestrator | 2026-02-04 03:14:13.674420 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 03:14:13.674432 | orchestrator | Wednesday 04 February 2026 03:13:47 +0000 (0:00:00.256) 0:00:00.256 **** 2026-02-04 03:14:13.674443 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:14:13.674456 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:14:13.674467 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:14:13.674478 | orchestrator | 2026-02-04 03:14:13.674489 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 03:14:13.674500 | orchestrator | Wednesday 04 February 2026 03:13:47 +0000 (0:00:00.329) 0:00:00.586 **** 2026-02-04 03:14:13.674512 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-02-04 03:14:13.674524 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-02-04 03:14:13.674535 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-02-04 03:14:13.674546 | orchestrator | 2026-02-04 03:14:13.674557 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-02-04 03:14:13.674593 | orchestrator | 2026-02-04 03:14:13.674604 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-04 03:14:13.674615 | orchestrator | Wednesday 04 February 2026 03:13:47 +0000 (0:00:00.446) 0:00:01.033 **** 2026-02-04 03:14:13.674627 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:14:13.674639 | orchestrator | 2026-02-04 03:14:13.674650 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 
2026-02-04 03:14:13.674661 | orchestrator | Wednesday 04 February 2026 03:13:48 +0000 (0:00:00.546) 0:00:01.579 ****
2026-02-04 03:14:13.674672 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-02-04 03:14:13.674799 | orchestrator |
2026-02-04 03:14:13.674813 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-02-04 03:14:13.674825 | orchestrator | Wednesday 04 February 2026 03:13:51 +0000 (0:00:03.471) 0:00:05.051 ****
2026-02-04 03:14:13.674838 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-02-04 03:14:13.674851 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-02-04 03:14:13.674864 | orchestrator |
2026-02-04 03:14:13.674877 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-02-04 03:14:13.674890 | orchestrator | Wednesday 04 February 2026 03:13:58 +0000 (0:00:06.103) 0:00:11.155 ****
2026-02-04 03:14:13.674903 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-04 03:14:13.674915 | orchestrator |
2026-02-04 03:14:13.674928 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-02-04 03:14:13.674941 | orchestrator | Wednesday 04 February 2026 03:14:01 +0000 (0:00:03.103) 0:00:14.259 ****
2026-02-04 03:14:13.674953 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 03:14:13.674966 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-02-04 03:14:13.674978 | orchestrator |
2026-02-04 03:14:13.674991 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-02-04 03:14:13.675003 | orchestrator | Wednesday 04 February 2026 03:14:05 +0000 (0:00:03.870) 0:00:18.129 ****
2026-02-04 03:14:13.675016 | orchestrator | ok: [testbed-node-0] =>
(item=admin) 2026-02-04 03:14:13.675029 | orchestrator | 2026-02-04 03:14:13.675041 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-04 03:14:13.675054 | orchestrator | Wednesday 04 February 2026 03:14:08 +0000 (0:00:03.133) 0:00:21.262 **** 2026-02-04 03:14:13.675066 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-04 03:14:13.675078 | orchestrator | 2026-02-04 03:14:13.675091 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-04 03:14:13.675103 | orchestrator | Wednesday 04 February 2026 03:14:11 +0000 (0:00:03.577) 0:00:24.840 **** 2026-02-04 03:14:13.675136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 03:14:13.675175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 03:14:13.675200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 03:14:13.675213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:13.675226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:13.675243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:13.675255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:13.675283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:19.680381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:19.680495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:19.680513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:19.680526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:19.680554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:19.680591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:19.680622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:19.680635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 
03:14:19.680648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:19.680660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:19.680672 | orchestrator | 2026-02-04 03:14:19.680779 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-04 03:14:19.680795 | orchestrator | Wednesday 04 February 2026 03:14:14 +0000 (0:00:02.680) 0:00:27.520 **** 2026-02-04 03:14:19.680806 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:14:19.680819 | orchestrator | 2026-02-04 03:14:19.680830 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-04 03:14:19.680842 | orchestrator | Wednesday 04 February 2026 03:14:14 +0000 (0:00:00.117) 0:00:27.638 **** 2026-02-04 03:14:19.680853 | orchestrator | skipping: [testbed-node-0] 2026-02-04 
03:14:19.680864 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:14:19.680875 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:14:19.680886 | orchestrator | 2026-02-04 03:14:19.680897 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-04 03:14:19.680918 | orchestrator | Wednesday 04 February 2026 03:14:14 +0000 (0:00:00.482) 0:00:28.120 **** 2026-02-04 03:14:19.680933 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:14:19.680947 | orchestrator | 2026-02-04 03:14:19.680960 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-04 03:14:19.680981 | orchestrator | Wednesday 04 February 2026 03:14:15 +0000 (0:00:00.552) 0:00:28.673 **** 2026-02-04 03:14:19.680996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 03:14:19.681021 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 03:14:21.524376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 03:14:21.524510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:21.524541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:21.524616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:21.524640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:21.524750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:21.524780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:21.524801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:21.524825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:21.524867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:21.524891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:21.524913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:21.524949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:22.409403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:22.409508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:22.409553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:22.409567 | orchestrator | 2026-02-04 03:14:22.409581 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-04 03:14:22.409593 | orchestrator | Wednesday 04 February 2026 03:14:21 +0000 (0:00:05.963) 0:00:34.637 **** 2026-02-04 03:14:22.409621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 03:14:22.409636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 03:14:22.409665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 03:14:22.409718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 03:14:22.409731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 03:14:22.409752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-02-04 03:14:22.409764 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:14:22.409796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 03:14:22.409810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 03:14:22.409821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 03:14:22.409840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.169380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.169507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.169525 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:14:23.169555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 03:14:23.169570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 03:14:23.169582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.169594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.169630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 
03:14:23.169643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.169655 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:14:23.169667 | orchestrator | 2026-02-04 03:14:23.169724 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-04 03:14:23.169739 | orchestrator | Wednesday 04 February 2026 03:14:22 +0000 (0:00:00.995) 0:00:35.632 **** 2026-02-04 03:14:23.169757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 03:14:23.169769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 03:14:23.169781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.169800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.509213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.509298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.509309 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:14:23.509333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 03:14:23.509345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 03:14:23.509358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.509371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.509435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.509450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.509464 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:14:23.509483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 03:14:23.509495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 03:14:23.509509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.509530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 03:14:23.509551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 03:14:27.727262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:14:27.727406 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:14:27.727428 | orchestrator | 2026-02-04 03:14:27.727442 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-04 
03:14:27.727455 | orchestrator | Wednesday 04 February 2026 03:14:23 +0000 (0:00:00.989) 0:00:36.621 **** 2026-02-04 03:14:27.727485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 03:14:27.727500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 03:14:27.727513 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 03:14:27.727568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:27.727584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:27.727601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:27.727614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:27.727626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:27.727646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:27.727657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:27.727678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:39.124749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:39.124912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:39.124940 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:39.124961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:39.125007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:39.125026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:39.125070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:39.125090 | orchestrator | 2026-02-04 03:14:39.125111 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-04 03:14:39.125132 | orchestrator | Wednesday 04 February 2026 03:14:29 +0000 (0:00:05.982) 0:00:42.604 **** 2026-02-04 03:14:39.125159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 03:14:39.125181 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 03:14:39.125213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-04 03:14:39.125237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:39.125274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:46.922930 | orchestrator | 2026-02-04 03:14:46.922940 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-04 03:14:46.922950 | orchestrator | Wednesday 04 February 2026 03:14:43 +0000 (0:00:13.943) 0:00:56.548 **** 2026-02-04 03:14:46.922963 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-04 03:14:51.152277 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-04 03:14:51.152386 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-04 03:14:51.152402 | orchestrator | 2026-02-04 03:14:51.152417 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-04 03:14:51.152429 | orchestrator | Wednesday 04 February 2026 03:14:46 +0000 (0:00:03.488) 0:01:00.036 **** 2026-02-04 03:14:51.152441 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-04 03:14:51.152485 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-04 03:14:51.152498 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-04 03:14:51.152533 | orchestrator | 2026-02-04 03:14:51.152545 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-04 03:14:51.152556 | orchestrator | Wednesday 04 February 2026 03:14:49 +0000 (0:00:02.421) 0:01:02.458 **** 2026-02-04 03:14:51.152571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 03:14:51.152587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 03:14:51.152600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-04 03:14:51.152629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:51.152648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 03:14:51.152668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-04 03:14:51.152681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 03:14:51.152693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-04 03:14:51.152770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-04 03:14:51.152782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 03:14:51.152802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 03:14:53.983654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-04 03:14:53.983817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 03:14:53.983836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 03:14:53.983850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 03:14:53.983862 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:53.983875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:53.983911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:14:53.983951 | orchestrator | 2026-02-04 03:14:53.983966 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-02-04 03:14:53.983978 | orchestrator | Wednesday 04 February 2026 03:14:52 +0000 (0:00:02.841) 0:01:05.299 **** 2026-02-04 03:14:53.983991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 03:14:53.984004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-04 
03:14:53.984015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 03:14:53.984027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 03:14:53.984052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 03:14:54.917682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 03:14:54.917845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 03:14:54.917864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 03:14:54.917877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 03:14:54.917889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 03:14:54.917900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 03:14:54.917977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 03:14:54.918000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 03:14:54.918091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 03:14:54.918120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 03:14:54.918140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 03:14:54.918161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 03:14:54.918186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 03:14:54.918200 | orchestrator |
2026-02-04 03:14:54.918215 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-04 03:14:54.918248 | orchestrator | Wednesday 04 February 2026 03:14:54 +0000 (0:00:02.732) 0:01:08.032 ****
2026-02-04 03:14:55.894156 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:14:55.894260 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:14:55.894275 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:14:55.894287 | orchestrator |
2026-02-04 03:14:55.894300 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-02-04 03:14:55.894312 | orchestrator | Wednesday 04 February 2026 03:14:55 +0000 (0:00:00.297) 0:01:08.330 ****
2026-02-04 03:14:55.894327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 03:14:55.894343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 03:14:55.894356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 03:14:55.894368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 03:14:55.894407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 03:14:55.894452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 03:14:55.894466 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:14:55.894478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 03:14:55.894490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 03:14:55.894502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 03:14:55.894514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 03:14:55.894541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 03:14:55.894578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 03:14:59.122291 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:14:59.122404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 03:14:59.122425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 03:14:59.122439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 03:14:59.122478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 03:14:59.122491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 03:14:59.122517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 03:14:59.122530 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:14:59.122542 | orchestrator |
2026-02-04 03:14:59.122573 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-02-04 03:14:59.122586 | orchestrator | Wednesday 04 February 2026 03:14:55 +0000 (0:00:00.780) 0:01:09.111 ****
2026-02-04 03:14:59.122598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 03:14:59.122620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 03:14:59.122641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-04 03:14:59.122672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 03:14:59.122786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 03:15:00.852229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-04 03:15:00.852339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 03:15:00.852357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 03:15:00.852392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-04 03:15:00.852405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 03:15:00.852419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 03:15:00.852463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-04 03:15:00.852477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 03:15:00.852489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 03:15:00.852501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-04 03:15:00.852521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 03:15:00.852533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 03:15:00.852544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-04 03:15:00.852561 | orchestrator |
2026-02-04 03:15:00.852582 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-04 03:15:00.852611 | orchestrator | Wednesday 04 February 2026 03:15:00 +0000 (0:00:04.284) 0:01:13.396 ****
2026-02-04 03:15:00.852633 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:15:00.852663 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:16:25.202274 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:16:25.202382 | orchestrator |
2026-02-04 03:16:25.202397 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-02-04 03:16:25.202409 | orchestrator | Wednesday 04 February 2026 03:15:00 +0000 (0:00:00.571) 0:01:13.968 ****
2026-02-04 03:16:25.202420 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-02-04 03:16:25.202430 | orchestrator |
2026-02-04 03:16:25.202440 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-02-04 03:16:25.202449 | orchestrator | Wednesday 04 February 2026 03:15:02 +0000 (0:00:02.055) 0:01:16.023 ****
2026-02-04 03:16:25.202459 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-04 03:16:25.202470 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-02-04 03:16:25.202479 | orchestrator |
2026-02-04 03:16:25.202489 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-02-04 03:16:25.202499 | orchestrator | Wednesday 04 February 2026 03:15:05 +0000 (0:00:02.183) 0:01:18.206 ****
2026-02-04 03:16:25.202508 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:16:25.202518 | orchestrator |
2026-02-04 03:16:25.202527 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-04 03:16:25.202562 | orchestrator | Wednesday 04 February 2026 03:15:20 +0000 (0:00:15.402) 0:01:33.608 ****
2026-02-04 03:16:25.202573 | orchestrator |
2026-02-04 03:16:25.202582 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-04 03:16:25.202592 | orchestrator | Wednesday 04 February 2026 03:15:20 +0000 (0:00:00.068) 0:01:33.682 ****
2026-02-04 03:16:25.202601 | orchestrator |
2026-02-04 03:16:25.202611 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-04 03:16:25.202620 | orchestrator | Wednesday 04 February 2026 03:15:20 +0000 (0:00:00.068) 0:01:33.751 ****
2026-02-04 03:16:25.202631 | orchestrator |
2026-02-04 03:16:25.202641 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-02-04 03:16:25.202651 | orchestrator | Wednesday 04 February 2026 03:15:20 +0000 (0:00:00.072) 0:01:33.824 ****
2026-02-04 03:16:25.202660 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:16:25.202670 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:16:25.202679 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:16:25.202689 | orchestrator |
2026-02-04 03:16:25.202699 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-02-04 03:16:25.202708 | orchestrator | Wednesday 04 February 2026 03:15:29 +0000 (0:00:08.623) 0:01:42.447 ****
2026-02-04 03:16:25.202718 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:16:25.202727 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:16:25.202737 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:16:25.202773 | orchestrator |
2026-02-04 03:16:25.202783 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-02-04 03:16:25.202793 | orchestrator | Wednesday 04 February 2026 03:15:35 +0000 (0:00:05.688) 0:01:48.136 ****
2026-02-04 03:16:25.202803 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:16:25.202812 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:16:25.202825 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:16:25.202837 | orchestrator |
2026-02-04 03:16:25.202848 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-02-04 03:16:25.202860 | orchestrator | Wednesday 04 February 2026 03:15:45 +0000 (0:00:10.596) 0:01:58.733 ****
2026-02-04 03:16:25.202872 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:16:25.202882 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:16:25.202893 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:16:25.202904 | orchestrator |
2026-02-04 03:16:25.202916
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-04 03:16:25.202927 | orchestrator | Wednesday 04 February 2026 03:15:56 +0000 (0:00:10.771) 0:02:09.504 **** 2026-02-04 03:16:25.202939 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:16:25.202950 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:16:25.202962 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:16:25.202973 | orchestrator | 2026-02-04 03:16:25.202983 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-04 03:16:25.202993 | orchestrator | Wednesday 04 February 2026 03:16:07 +0000 (0:00:10.677) 0:02:20.181 **** 2026-02-04 03:16:25.203003 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:16:25.203012 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:16:25.203022 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:16:25.203031 | orchestrator | 2026-02-04 03:16:25.203041 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-04 03:16:25.203051 | orchestrator | Wednesday 04 February 2026 03:16:18 +0000 (0:00:11.106) 0:02:31.288 **** 2026-02-04 03:16:25.203060 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:16:25.203070 | orchestrator | 2026-02-04 03:16:25.203079 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:16:25.203090 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 03:16:25.203101 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 03:16:25.203118 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 03:16:25.203128 | orchestrator | 2026-02-04 03:16:25.203138 | orchestrator | 2026-02-04 03:16:25.203148 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-04 03:16:25.203157 | orchestrator | Wednesday 04 February 2026 03:16:24 +0000 (0:00:06.639) 0:02:37.927 **** 2026-02-04 03:16:25.203167 | orchestrator | =============================================================================== 2026-02-04 03:16:25.203189 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.40s 2026-02-04 03:16:25.203199 | orchestrator | designate : Copying over designate.conf -------------------------------- 13.94s 2026-02-04 03:16:25.203225 | orchestrator | designate : Restart designate-worker container ------------------------- 11.11s 2026-02-04 03:16:25.203235 | orchestrator | designate : Restart designate-producer container ----------------------- 10.77s 2026-02-04 03:16:25.203245 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.68s 2026-02-04 03:16:25.203255 | orchestrator | designate : Restart designate-central container ------------------------ 10.60s 2026-02-04 03:16:25.203264 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.62s 2026-02-04 03:16:25.203274 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 6.64s 2026-02-04 03:16:25.203283 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.10s 2026-02-04 03:16:25.203293 | orchestrator | designate : Copying over config.json files for services ----------------- 5.98s 2026-02-04 03:16:25.203302 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.96s 2026-02-04 03:16:25.203312 | orchestrator | designate : Restart designate-api container ----------------------------- 5.69s 2026-02-04 03:16:25.203321 | orchestrator | designate : Check designate containers ---------------------------------- 4.28s 2026-02-04 03:16:25.203331 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 3.87s 2026-02-04 03:16:25.203340 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.58s 2026-02-04 03:16:25.203350 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.49s 2026-02-04 03:16:25.203360 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.47s 2026-02-04 03:16:25.203369 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.13s 2026-02-04 03:16:25.203379 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.10s 2026-02-04 03:16:25.203388 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.84s 2026-02-04 03:16:27.560953 | orchestrator | 2026-02-04 03:16:27 | INFO  | Task 9837ea51-0fd0-43e4-af48-bb553c78e356 (octavia) was prepared for execution. 2026-02-04 03:16:27.561033 | orchestrator | 2026-02-04 03:16:27 | INFO  | It takes a moment until task 9837ea51-0fd0-43e4-af48-bb553c78e356 (octavia) has been started and output is visible here. 
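The `PLAY RECAP` block above encodes per-host result counters (`ok=`, `changed=`, `failed=`, …) in a fixed `key=value` layout, which makes it easy to check a captured log for failures mechanically. A minimal sketch of that idea follows; the helper name `parse_recap_line` is ours for illustration, not anything provided by Zuul, Ansible, or this job:

```python
import re

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Split one PLAY RECAP line (as printed above) into hostname + counters."""
    host, _, rest = line.partition(":")
    counters = {key: int(val) for key, val in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

# Example using the testbed-node-0 recap line from the designate play above.
host, stats = parse_recap_line(
    "testbed-node-0 : ok=29 changed=23 unreachable=0 failed=0 "
    "skipped=7 rescued=0 ignored=0"
)
print(host, stats)
```

A wrapper scanning a whole log could flag any host where `failed` or `unreachable` is non-zero, which is exactly the condition under which a run like this one would not have proceeded to the octavia play.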
2026-02-04 03:18:28.332422 | orchestrator | 2026-02-04 03:18:28.332538 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 03:18:28.332556 | orchestrator | 2026-02-04 03:18:28.332570 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 03:18:28.332582 | orchestrator | Wednesday 04 February 2026 03:16:31 +0000 (0:00:00.272) 0:00:00.272 **** 2026-02-04 03:18:28.332593 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:18:28.332605 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:18:28.332617 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:18:28.332628 | orchestrator | 2026-02-04 03:18:28.332639 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 03:18:28.332650 | orchestrator | Wednesday 04 February 2026 03:16:32 +0000 (0:00:00.361) 0:00:00.633 **** 2026-02-04 03:18:28.332661 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-04 03:18:28.332702 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-02-04 03:18:28.332714 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-02-04 03:18:28.332725 | orchestrator | 2026-02-04 03:18:28.332736 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-02-04 03:18:28.332746 | orchestrator | 2026-02-04 03:18:28.332757 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-04 03:18:28.332768 | orchestrator | Wednesday 04 February 2026 03:16:32 +0000 (0:00:00.446) 0:00:01.080 **** 2026-02-04 03:18:28.332780 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:18:28.332792 | orchestrator | 2026-02-04 03:18:28.332835 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-02-04 03:18:28.332848 | orchestrator | Wednesday 04 February 2026 03:16:33 +0000 (0:00:00.562) 0:00:01.642 **** 2026-02-04 03:18:28.332860 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-02-04 03:18:28.332871 | orchestrator | 2026-02-04 03:18:28.332882 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-02-04 03:18:28.332893 | orchestrator | Wednesday 04 February 2026 03:16:36 +0000 (0:00:03.352) 0:00:04.995 **** 2026-02-04 03:18:28.332904 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-02-04 03:18:28.332915 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-02-04 03:18:28.332925 | orchestrator | 2026-02-04 03:18:28.332936 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-02-04 03:18:28.332948 | orchestrator | Wednesday 04 February 2026 03:16:42 +0000 (0:00:06.278) 0:00:11.273 **** 2026-02-04 03:18:28.332961 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 03:18:28.332973 | orchestrator | 2026-02-04 03:18:28.332986 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-02-04 03:18:28.332999 | orchestrator | Wednesday 04 February 2026 03:16:45 +0000 (0:00:03.216) 0:00:14.490 **** 2026-02-04 03:18:28.333011 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 03:18:28.333024 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-04 03:18:28.333036 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-04 03:18:28.333049 | orchestrator | 2026-02-04 03:18:28.333084 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-02-04 03:18:28.333098 | orchestrator | Wednesday 04 February 2026 03:16:53 +0000 
(0:00:07.915) 0:00:22.406 **** 2026-02-04 03:18:28.333110 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-04 03:18:28.333122 | orchestrator | 2026-02-04 03:18:28.333135 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-02-04 03:18:28.333147 | orchestrator | Wednesday 04 February 2026 03:16:57 +0000 (0:00:03.236) 0:00:25.642 **** 2026-02-04 03:18:28.333159 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-04 03:18:28.333172 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-04 03:18:28.333184 | orchestrator | 2026-02-04 03:18:28.333197 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-02-04 03:18:28.333209 | orchestrator | Wednesday 04 February 2026 03:17:03 +0000 (0:00:06.908) 0:00:32.551 **** 2026-02-04 03:18:28.333221 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-02-04 03:18:28.333234 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-02-04 03:18:28.333246 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-02-04 03:18:28.333258 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-02-04 03:18:28.333271 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-02-04 03:18:28.333284 | orchestrator | 2026-02-04 03:18:28.333296 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-04 03:18:28.333319 | orchestrator | Wednesday 04 February 2026 03:17:19 +0000 (0:00:15.215) 0:00:47.767 **** 2026-02-04 03:18:28.333330 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:18:28.333341 | orchestrator | 2026-02-04 03:18:28.333352 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-02-04 03:18:28.333370 | orchestrator | Wednesday 04 February 2026 03:17:19 +0000 (0:00:00.748) 0:00:48.515 **** 2026-02-04 03:18:28.333389 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:18:28.333411 | orchestrator | 2026-02-04 03:18:28.333437 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-02-04 03:18:28.333455 | orchestrator | Wednesday 04 February 2026 03:17:24 +0000 (0:00:04.475) 0:00:52.991 **** 2026-02-04 03:18:28.333471 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:18:28.333489 | orchestrator | 2026-02-04 03:18:28.333505 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-04 03:18:28.333544 | orchestrator | Wednesday 04 February 2026 03:17:28 +0000 (0:00:04.215) 0:00:57.206 **** 2026-02-04 03:18:28.333565 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:18:28.333584 | orchestrator | 2026-02-04 03:18:28.333602 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-02-04 03:18:28.333620 | orchestrator | Wednesday 04 February 2026 03:17:31 +0000 (0:00:03.022) 0:01:00.228 **** 2026-02-04 03:18:28.333635 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-04 03:18:28.333646 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-04 03:18:28.333657 | orchestrator | 2026-02-04 03:18:28.333667 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-04 03:18:28.333678 | orchestrator | Wednesday 04 February 2026 03:17:41 +0000 (0:00:09.971) 0:01:10.199 **** 2026-02-04 03:18:28.333689 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-04 03:18:28.333701 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-04 03:18:28.333713 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-04 03:18:28.333725 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-04 03:18:28.333741 | orchestrator | 2026-02-04 03:18:28.333759 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-04 03:18:28.333778 | orchestrator | Wednesday 04 February 2026 03:17:56 +0000 (0:00:14.603) 0:01:24.802 **** 2026-02-04 03:18:28.333796 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:18:28.333839 | orchestrator | 2026-02-04 03:18:28.333857 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-04 03:18:28.333875 | orchestrator | Wednesday 04 February 2026 03:18:00 +0000 (0:00:04.334) 0:01:29.137 **** 2026-02-04 03:18:28.333894 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:18:28.333913 | orchestrator | 2026-02-04 03:18:28.333932 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-04 03:18:28.333950 | orchestrator | Wednesday 04 February 2026 03:18:05 +0000 (0:00:05.019) 0:01:34.156 **** 2026-02-04 03:18:28.333965 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:18:28.333976 | orchestrator | 2026-02-04 03:18:28.333987 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-04 03:18:28.333998 | orchestrator | Wednesday 04 February 2026 03:18:05 +0000 (0:00:00.232) 0:01:34.389 **** 2026-02-04 03:18:28.334008 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:18:28.334090 | orchestrator | 2026-02-04 03:18:28.334104 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-02-04 03:18:28.334116 | orchestrator | Wednesday 04 February 2026 03:18:10 +0000 (0:00:04.696) 0:01:39.086 **** 2026-02-04 03:18:28.334138 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:18:28.334150 | orchestrator | 2026-02-04 03:18:28.334161 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-04 03:18:28.334180 | orchestrator | Wednesday 04 February 2026 03:18:11 +0000 (0:00:01.099) 0:01:40.186 **** 2026-02-04 03:18:28.334191 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:18:28.334202 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:18:28.334213 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:18:28.334224 | orchestrator | 2026-02-04 03:18:28.334235 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-04 03:18:28.334246 | orchestrator | Wednesday 04 February 2026 03:18:16 +0000 (0:00:04.881) 0:01:45.067 **** 2026-02-04 03:18:28.334256 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:18:28.334267 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:18:28.334278 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:18:28.334288 | orchestrator | 2026-02-04 03:18:28.334299 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-04 03:18:28.334310 | orchestrator | Wednesday 04 February 2026 03:18:20 +0000 (0:00:04.408) 0:01:49.476 **** 2026-02-04 03:18:28.334321 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:18:28.334332 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:18:28.334342 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:18:28.334359 | orchestrator | 2026-02-04 03:18:28.334377 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-04 
03:18:28.334397 | orchestrator | Wednesday 04 February 2026 03:18:21 +0000 (0:00:01.000) 0:01:50.477 **** 2026-02-04 03:18:28.334415 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:18:28.334435 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:18:28.334455 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:18:28.334474 | orchestrator | 2026-02-04 03:18:28.334491 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-04 03:18:28.334502 | orchestrator | Wednesday 04 February 2026 03:18:23 +0000 (0:00:01.794) 0:01:52.271 **** 2026-02-04 03:18:28.334512 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:18:28.334523 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:18:28.334534 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:18:28.334545 | orchestrator | 2026-02-04 03:18:28.334556 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-04 03:18:28.334567 | orchestrator | Wednesday 04 February 2026 03:18:24 +0000 (0:00:01.275) 0:01:53.547 **** 2026-02-04 03:18:28.334578 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:18:28.334589 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:18:28.334599 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:18:28.334610 | orchestrator | 2026-02-04 03:18:28.334621 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-04 03:18:28.334632 | orchestrator | Wednesday 04 February 2026 03:18:26 +0000 (0:00:01.185) 0:01:54.733 **** 2026-02-04 03:18:28.334643 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:18:28.334654 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:18:28.334664 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:18:28.334675 | orchestrator | 2026-02-04 03:18:28.334698 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-04 03:18:56.203649 | orchestrator 
| Wednesday 04 February 2026 03:18:28 +0000 (0:00:02.152) 0:01:56.885 **** 2026-02-04 03:18:56.203773 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:18:56.203866 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:18:56.203890 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:18:56.203902 | orchestrator | 2026-02-04 03:18:56.203915 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-04 03:18:56.203929 | orchestrator | Wednesday 04 February 2026 03:18:29 +0000 (0:00:01.469) 0:01:58.354 **** 2026-02-04 03:18:56.203949 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:18:56.203970 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:18:56.204020 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:18:56.204041 | orchestrator | 2026-02-04 03:18:56.204059 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-04 03:18:56.204079 | orchestrator | Wednesday 04 February 2026 03:18:30 +0000 (0:00:00.643) 0:01:58.998 **** 2026-02-04 03:18:56.204095 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:18:56.204106 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:18:56.204120 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:18:56.204139 | orchestrator | 2026-02-04 03:18:56.204159 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-04 03:18:56.204177 | orchestrator | Wednesday 04 February 2026 03:18:35 +0000 (0:00:04.918) 0:02:03.916 **** 2026-02-04 03:18:56.204195 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:18:56.204209 | orchestrator | 2026-02-04 03:18:56.204222 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-04 03:18:56.204235 | orchestrator | Wednesday 04 February 2026 03:18:35 +0000 (0:00:00.532) 0:02:04.449 **** 2026-02-04 
03:18:56.204248 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:18:56.204260 | orchestrator | 2026-02-04 03:18:56.204274 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-04 03:18:56.204286 | orchestrator | Wednesday 04 February 2026 03:18:39 +0000 (0:00:03.605) 0:02:08.054 **** 2026-02-04 03:18:56.204297 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:18:56.204307 | orchestrator | 2026-02-04 03:18:56.204318 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-04 03:18:56.204329 | orchestrator | Wednesday 04 February 2026 03:18:42 +0000 (0:00:03.082) 0:02:11.137 **** 2026-02-04 03:18:56.204340 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-04 03:18:56.204352 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-04 03:18:56.204364 | orchestrator | 2026-02-04 03:18:56.204374 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-04 03:18:56.204385 | orchestrator | Wednesday 04 February 2026 03:18:49 +0000 (0:00:07.056) 0:02:18.193 **** 2026-02-04 03:18:56.204396 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:18:56.204407 | orchestrator | 2026-02-04 03:18:56.204417 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-04 03:18:56.204428 | orchestrator | Wednesday 04 February 2026 03:18:53 +0000 (0:00:04.003) 0:02:22.196 **** 2026-02-04 03:18:56.204439 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:18:56.204449 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:18:56.204460 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:18:56.204471 | orchestrator | 2026-02-04 03:18:56.204497 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-04 03:18:56.204509 | orchestrator | Wednesday 04 February 2026 03:18:54 +0000 (0:00:00.509) 0:02:22.706 **** 
2026-02-04 03:18:56.204523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:18:56.204558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:18:56.204582 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:18:56.204595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:18:56.204607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:18:56.204623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:18:56.204636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:18:56.204655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:18:56.204676 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:18:57.657225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:18:57.657328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:18:57.657361 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:18:57.657375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:18:57.657387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:18:57.657419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:18:57.657432 | orchestrator | 2026-02-04 03:18:57.657446 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-04 03:18:57.657459 | orchestrator | Wednesday 04 February 2026 03:18:56 +0000 (0:00:02.466) 0:02:25.172 **** 2026-02-04 03:18:57.657470 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:18:57.657482 | orchestrator | 2026-02-04 03:18:57.657493 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-04 03:18:57.657504 | orchestrator | Wednesday 04 February 2026 03:18:56 +0000 (0:00:00.138) 0:02:25.311 **** 2026-02-04 03:18:57.657515 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:18:57.657542 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:18:57.657554 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:18:57.657565 | orchestrator | 2026-02-04 03:18:57.657576 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-04 03:18:57.657587 | orchestrator | Wednesday 04 February 2026 03:18:57 +0000 (0:00:00.309) 0:02:25.621 **** 2026-02-04 03:18:57.657599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 03:18:57.657612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 03:18:57.657631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 03:18:57.657653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 03:18:57.657664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:18:57.657676 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:18:57.657696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 03:19:02.331875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 03:19:02.331961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 03:19:02.331986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 03:19:02.332011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:19:02.332019 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:19:02.332028 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 03:19:02.332036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 03:19:02.332057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 03:19:02.332064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 03:19:02.332074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:19:02.332086 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:19:02.332093 | orchestrator | 2026-02-04 03:19:02.332100 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-04 03:19:02.332108 | orchestrator | Wednesday 04 February 2026 03:18:57 +0000 (0:00:00.689) 0:02:26.310 **** 2026-02-04 03:19:02.332115 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:19:02.332121 | orchestrator | 2026-02-04 03:19:02.332128 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-04 03:19:02.332134 | orchestrator | Wednesday 04 February 2026 03:18:58 +0000 (0:00:00.715) 0:02:27.026 **** 2026-02-04 03:19:02.332141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:19:02.332148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:19:02.332159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:19:03.866959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:19:03.867122 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:19:03.867140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:19:03.867191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:03.867206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:03.867218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:03.867251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:03.867279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:03.867292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:03.867304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:19:03.867316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:19:03.867327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:19:03.867339 | orchestrator | 2026-02-04 03:19:03.867353 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-04 03:19:03.867366 | orchestrator | Wednesday 04 February 2026 03:19:03 +0000 (0:00:04.816) 0:02:31.843 **** 2026-02-04 03:19:03.867388 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 03:19:03.973144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 03:19:03.973248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 03:19:03.973265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 03:19:03.973278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:19:03.973291 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:19:03.973305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 03:19:03.973318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 03:19:03.973375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 03:19:03.973389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 03:19:03.973401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:19:03.973413 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:19:03.973424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 03:19:03.973436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 03:19:03.973447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 03:19:03.973474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-02-04 03:19:04.539357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:19:04.539478 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:19:04.539507 | orchestrator | 2026-02-04 03:19:04.539526 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-04 03:19:04.539546 | orchestrator | Wednesday 04 February 2026 03:19:03 +0000 (0:00:00.691) 0:02:32.534 **** 2026-02-04 03:19:04.539567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-02-04 03:19:04.539591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 03:19:04.539610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 03:19:04.539674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 03:19:04.539728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:19:04.539752 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:19:04.539773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 03:19:04.539795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 03:19:04.539846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 03:19:04.539865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 03:19:04.539899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:19:04.539920 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:19:04.539960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 03:19:08.956315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 03:19:08.956447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 03:19:08.956465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 03:19:08.956478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 03:19:08.956515 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:19:08.956529 | orchestrator | 2026-02-04 03:19:08.956597 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-04 
03:19:08.956621 | orchestrator | Wednesday 04 February 2026 03:19:05 +0000 (0:00:01.057) 0:02:33.592 **** 2026-02-04 03:19:08.956641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:19:08.956706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:19:08.956722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:19:08.956734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:19:08.956756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:19:08.956767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:19:08.956779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:08.956841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:24.545936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:24.546108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:24.546129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:24.546167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:24.546180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:19:24.546206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-02-04 03:19:24.546239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:19:24.546252 | orchestrator | 2026-02-04 03:19:24.546265 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-04 03:19:24.546279 | orchestrator | Wednesday 04 February 2026 03:19:09 +0000 (0:00:04.851) 0:02:38.443 **** 2026-02-04 03:19:24.546290 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-04 03:19:24.546303 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-04 03:19:24.546314 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-04 03:19:24.546325 | orchestrator | 2026-02-04 03:19:24.546336 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-04 03:19:24.546347 | orchestrator | Wednesday 04 February 2026 03:19:11 +0000 (0:00:01.554) 0:02:39.998 **** 2026-02-04 03:19:24.546359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:19:24.546381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:19:24.546396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:19:24.546434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:19:39.659707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:19:39.659909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:19:39.659955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:39.660020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:39.660035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:39.660048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:39.660094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:39.660108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:39.660129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:19:39.660142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:19:39.660153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:19:39.660165 | orchestrator | 2026-02-04 03:19:39.660179 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-04 03:19:39.660194 | orchestrator | Wednesday 04 February 2026 03:19:27 +0000 (0:00:16.267) 0:02:56.265 **** 2026-02-04 03:19:39.660205 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:19:39.660217 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:19:39.660228 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:19:39.660239 | orchestrator | 2026-02-04 03:19:39.660250 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-04 03:19:39.660261 | orchestrator | Wednesday 04 February 2026 03:19:29 +0000 (0:00:01.962) 0:02:58.228 **** 2026-02-04 03:19:39.660272 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-04 03:19:39.660283 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-04 03:19:39.660294 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-04 03:19:39.660305 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-04 03:19:39.660316 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-04 03:19:39.660326 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-04 03:19:39.660337 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-04 03:19:39.660348 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-04 03:19:39.660359 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-04 03:19:39.660374 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-04 03:19:39.660385 | orchestrator 
| changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-04 03:19:39.660396 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-04 03:19:39.660407 | orchestrator | 2026-02-04 03:19:39.660418 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-04 03:19:39.660435 | orchestrator | Wednesday 04 February 2026 03:19:34 +0000 (0:00:04.911) 0:03:03.139 **** 2026-02-04 03:19:39.660447 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-04 03:19:39.660458 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-04 03:19:39.660476 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-04 03:19:47.514175 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-04 03:19:47.514287 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-04 03:19:47.514303 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-04 03:19:47.514315 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-04 03:19:47.514326 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-04 03:19:47.514337 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-04 03:19:47.514348 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-04 03:19:47.514359 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-04 03:19:47.514370 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-04 03:19:47.514381 | orchestrator | 2026-02-04 03:19:47.514393 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-04 03:19:47.514406 | orchestrator | Wednesday 04 February 2026 03:19:39 +0000 (0:00:05.072) 0:03:08.211 **** 2026-02-04 03:19:47.514417 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-02-04 03:19:47.514428 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-04 03:19:47.514439 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-04 03:19:47.514450 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-04 03:19:47.514461 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-04 03:19:47.514472 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-04 03:19:47.514483 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-04 03:19:47.514494 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-04 03:19:47.514505 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-04 03:19:47.514516 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-04 03:19:47.514527 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-04 03:19:47.514537 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-04 03:19:47.514548 | orchestrator | 2026-02-04 03:19:47.514559 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-04 03:19:47.514570 | orchestrator | Wednesday 04 February 2026 03:19:44 +0000 (0:00:04.953) 0:03:13.165 **** 2026-02-04 03:19:47.514585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:19:47.514618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:19:47.514686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 03:19:47.514703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:19:47.514717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-04 03:19:47.514730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-04 03:19:47.514744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:47.514757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:47.514784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-04 03:19:47.514861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:21:13.938690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:21:13.938857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-04 03:21:13.938877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:21:13.938888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:21:13.938919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-04 03:21:13.938930 | orchestrator | 2026-02-04 
03:21:13.938954 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-04 03:21:13.938965 | orchestrator | Wednesday 04 February 2026 03:19:48 +0000 (0:00:03.496) 0:03:16.661 **** 2026-02-04 03:21:13.938974 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:21:13.938984 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:21:13.938992 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:21:13.939001 | orchestrator | 2026-02-04 03:21:13.939010 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-04 03:21:13.939018 | orchestrator | Wednesday 04 February 2026 03:19:48 +0000 (0:00:00.506) 0:03:17.167 **** 2026-02-04 03:21:13.939027 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:21:13.939036 | orchestrator | 2026-02-04 03:21:13.939044 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-04 03:21:13.939053 | orchestrator | Wednesday 04 February 2026 03:19:50 +0000 (0:00:02.116) 0:03:19.284 **** 2026-02-04 03:21:13.939062 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:21:13.939070 | orchestrator | 2026-02-04 03:21:13.939079 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-04 03:21:13.939088 | orchestrator | Wednesday 04 February 2026 03:19:52 +0000 (0:00:02.092) 0:03:21.376 **** 2026-02-04 03:21:13.939099 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:21:13.939111 | orchestrator | 2026-02-04 03:21:13.939122 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-04 03:21:13.939133 | orchestrator | Wednesday 04 February 2026 03:19:54 +0000 (0:00:02.157) 0:03:23.533 **** 2026-02-04 03:21:13.939160 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:21:13.939172 | orchestrator | 2026-02-04 03:21:13.939183 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-02-04 03:21:13.939194 | orchestrator | Wednesday 04 February 2026 03:19:57 +0000 (0:00:02.258) 0:03:25.792 **** 2026-02-04 03:21:13.939205 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:21:13.939215 | orchestrator | 2026-02-04 03:21:13.939228 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-04 03:21:13.939240 | orchestrator | Wednesday 04 February 2026 03:20:18 +0000 (0:00:20.823) 0:03:46.615 **** 2026-02-04 03:21:13.939253 | orchestrator | 2026-02-04 03:21:13.939265 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-04 03:21:13.939278 | orchestrator | Wednesday 04 February 2026 03:20:18 +0000 (0:00:00.067) 0:03:46.683 **** 2026-02-04 03:21:13.939290 | orchestrator | 2026-02-04 03:21:13.939303 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-04 03:21:13.939316 | orchestrator | Wednesday 04 February 2026 03:20:18 +0000 (0:00:00.063) 0:03:46.746 **** 2026-02-04 03:21:13.939328 | orchestrator | 2026-02-04 03:21:13.939340 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-04 03:21:13.939352 | orchestrator | Wednesday 04 February 2026 03:20:18 +0000 (0:00:00.068) 0:03:46.814 **** 2026-02-04 03:21:13.939374 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:21:13.939385 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:21:13.939396 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:21:13.939408 | orchestrator | 2026-02-04 03:21:13.939419 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-04 03:21:13.939429 | orchestrator | Wednesday 04 February 2026 03:20:33 +0000 (0:00:15.510) 0:04:02.325 **** 2026-02-04 03:21:13.939440 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:21:13.939451 | orchestrator | changed: 
[testbed-node-0] 2026-02-04 03:21:13.939462 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:21:13.939473 | orchestrator | 2026-02-04 03:21:13.939484 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-04 03:21:13.939495 | orchestrator | Wednesday 04 February 2026 03:20:44 +0000 (0:00:10.412) 0:04:12.738 **** 2026-02-04 03:21:13.939506 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:21:13.939517 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:21:13.939528 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:21:13.939539 | orchestrator | 2026-02-04 03:21:13.939550 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-04 03:21:13.939560 | orchestrator | Wednesday 04 February 2026 03:20:53 +0000 (0:00:09.714) 0:04:22.453 **** 2026-02-04 03:21:13.939571 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:21:13.939582 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:21:13.939593 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:21:13.939604 | orchestrator | 2026-02-04 03:21:13.939615 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-04 03:21:13.939626 | orchestrator | Wednesday 04 February 2026 03:21:03 +0000 (0:00:09.750) 0:04:32.203 **** 2026-02-04 03:21:13.939637 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:21:13.939648 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:21:13.939659 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:21:13.939670 | orchestrator | 2026-02-04 03:21:13.939680 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:21:13.939693 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 03:21:13.939705 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-04 03:21:13.939717 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 03:21:13.939728 | orchestrator | 2026-02-04 03:21:13.939739 | orchestrator | 2026-02-04 03:21:13.939750 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 03:21:13.939761 | orchestrator | Wednesday 04 February 2026 03:21:13 +0000 (0:00:10.280) 0:04:42.483 **** 2026-02-04 03:21:13.939771 | orchestrator | =============================================================================== 2026-02-04 03:21:13.939782 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 20.82s 2026-02-04 03:21:13.939825 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.27s 2026-02-04 03:21:13.939855 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.51s 2026-02-04 03:21:13.939867 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.22s 2026-02-04 03:21:13.939878 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.60s 2026-02-04 03:21:13.939889 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.41s 2026-02-04 03:21:13.939900 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.28s 2026-02-04 03:21:13.939911 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.97s 2026-02-04 03:21:13.939921 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 9.75s 2026-02-04 03:21:13.939932 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 9.71s 2026-02-04 03:21:13.939950 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.92s 2026-02-04 03:21:13.939962 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 7.06s 2026-02-04 03:21:13.939972 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.91s 2026-02-04 03:21:13.939983 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.28s 2026-02-04 03:21:13.940001 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.07s 2026-02-04 03:21:14.265055 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.02s 2026-02-04 03:21:14.265157 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 4.95s 2026-02-04 03:21:14.265173 | orchestrator | octavia : Gather facts -------------------------------------------------- 4.92s 2026-02-04 03:21:14.265185 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 4.91s 2026-02-04 03:21:14.265196 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 4.88s 2026-02-04 03:21:16.674267 | orchestrator | 2026-02-04 03:21:16 | INFO  | Task 8f3fb573-2d1b-47db-923b-85d9f3500d54 (ceilometer) was prepared for execution. 2026-02-04 03:21:16.674382 | orchestrator | 2026-02-04 03:21:16 | INFO  | It takes a moment until task 8f3fb573-2d1b-47db-923b-85d9f3500d54 (ceilometer) has been started and output is visible here. 
2026-02-04 03:21:39.576498 | orchestrator | 2026-02-04 03:21:39.576635 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 03:21:39.576655 | orchestrator | 2026-02-04 03:21:39.576668 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 03:21:39.576680 | orchestrator | Wednesday 04 February 2026 03:21:20 +0000 (0:00:00.265) 0:00:00.265 **** 2026-02-04 03:21:39.576692 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:21:39.576704 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:21:39.576715 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:21:39.576726 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:21:39.576737 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:21:39.576748 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:21:39.576759 | orchestrator | 2026-02-04 03:21:39.576770 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 03:21:39.576781 | orchestrator | Wednesday 04 February 2026 03:21:21 +0000 (0:00:00.724) 0:00:00.990 **** 2026-02-04 03:21:39.576793 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-02-04 03:21:39.576870 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-02-04 03:21:39.576891 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-02-04 03:21:39.576909 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-02-04 03:21:39.576926 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-02-04 03:21:39.576937 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-02-04 03:21:39.576948 | orchestrator | 2026-02-04 03:21:39.576960 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-02-04 03:21:39.576971 | orchestrator | 2026-02-04 03:21:39.576982 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-02-04 03:21:39.576993 | orchestrator | Wednesday 04 February 2026 03:21:22 +0000 (0:00:00.692) 0:00:01.683 **** 2026-02-04 03:21:39.577006 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 03:21:39.577018 | orchestrator | 2026-02-04 03:21:39.577029 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-02-04 03:21:39.577043 | orchestrator | Wednesday 04 February 2026 03:21:23 +0000 (0:00:01.221) 0:00:02.904 **** 2026-02-04 03:21:39.577056 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:21:39.577070 | orchestrator | 2026-02-04 03:21:39.577083 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-02-04 03:21:39.577121 | orchestrator | Wednesday 04 February 2026 03:21:23 +0000 (0:00:00.117) 0:00:03.022 **** 2026-02-04 03:21:39.577135 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:21:39.577148 | orchestrator | 2026-02-04 03:21:39.577161 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-02-04 03:21:39.577174 | orchestrator | Wednesday 04 February 2026 03:21:23 +0000 (0:00:00.130) 0:00:03.153 **** 2026-02-04 03:21:39.577186 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 03:21:39.577199 | orchestrator | 2026-02-04 03:21:39.577211 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-02-04 03:21:39.577224 | orchestrator | Wednesday 04 February 2026 03:21:27 +0000 (0:00:03.347) 0:00:06.500 **** 2026-02-04 03:21:39.577236 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 03:21:39.577249 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-02-04 03:21:39.577261 | orchestrator | 
2026-02-04 03:21:39.577290 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] ***********************
2026-02-04 03:21:39.577303 | orchestrator | Wednesday 04 February 2026 03:21:30 +0000 (0:00:03.812) 0:00:10.312 ****
2026-02-04 03:21:39.577315 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-04 03:21:39.577328 | orchestrator |
2026-02-04 03:21:39.577340 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ******************
2026-02-04 03:21:39.577352 | orchestrator | Wednesday 04 February 2026 03:21:33 +0000 (0:00:03.041) 0:00:13.353 ****
2026-02-04 03:21:39.577365 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin)
2026-02-04 03:21:39.577379 | orchestrator |
2026-02-04 03:21:39.577391 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] *******
2026-02-04 03:21:39.577404 | orchestrator | Wednesday 04 February 2026 03:21:37 +0000 (0:00:03.941) 0:00:17.295 ****
2026-02-04 03:21:39.577417 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:21:39.577429 | orchestrator |
2026-02-04 03:21:39.577440 | orchestrator | TASK [ceilometer : Ensuring config directories exist] **************************
2026-02-04 03:21:39.577451 | orchestrator | Wednesday 04 February 2026 03:21:38 +0000 (0:00:00.116) 0:00:17.411 ****
2026-02-04 03:21:39.577465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:21:39.577499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:21:39.577513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:21:39.577534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:21:39.577546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:21:39.577565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:21:39.577577 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:21:39.577597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:21:44.245380 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:21:44.245492 | orchestrator |
2026-02-04 03:21:44.245505 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] *****
2026-02-04 03:21:44.245514 | orchestrator | Wednesday 04 February 2026 03:21:39 +0000 (0:00:01.528) 0:00:18.940 ****
2026-02-04 03:21:44.245522 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 03:21:44.245530 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-04 03:21:44.245537 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-04 03:21:44.245545 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-04 03:21:44.245552 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-04 03:21:44.245559 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-04 03:21:44.245566 | orchestrator |
2026-02-04 03:21:44.245574 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] ***
2026-02-04 03:21:44.245582 | orchestrator | Wednesday 04 February 2026 03:21:41 +0000 (0:00:01.509) 0:00:20.450 ****
2026-02-04 03:21:44.245589 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:21:44.245598 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:21:44.245605 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:21:44.245612 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:21:44.245619 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:21:44.245626 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:21:44.245634 | orchestrator |
2026-02-04 03:21:44.245641 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] ***
2026-02-04 03:21:44.245648 | orchestrator | Wednesday 04 February 2026 03:21:41 +0000 (0:00:00.780) 0:00:21.068 ****
2026-02-04 03:21:44.245656 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:21:44.245664 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:21:44.245671 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:21:44.245678 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:21:44.245685 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:21:44.245693 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:21:44.245700 | orchestrator |
2026-02-04 03:21:44.245707 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter definitions] ***
2026-02-04 03:21:44.245715 | orchestrator | Wednesday 04 February 2026 03:21:42 +0000 (0:00:00.780) 0:00:21.849 ****
2026-02-04 03:21:44.245722 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:21:44.245730 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:21:44.245737 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:21:44.245744 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:21:44.245751 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:21:44.245759 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:21:44.245766 | orchestrator |
2026-02-04 03:21:44.245858 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] *********
2026-02-04 03:21:44.245870 | orchestrator | Wednesday 04 February 2026 03:21:43 +0000 (0:00:00.650) 0:00:22.500 ****
2026-02-04 03:21:44.245880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:21:44.245889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:21:44.245905 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:21:44.245931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:21:44.245940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:21:44.245949 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:21:44.245959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:21:44.245979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:21:44.245989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:21:44.245998 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:21:44.246007 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:21:44.246061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:21:44.246077 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:21:44.246092 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:21:48.825449 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:21:48.825584 | orchestrator |
2026-02-04 03:21:48.825603 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] *************
2026-02-04 03:21:48.825617 | orchestrator | Wednesday 04 February 2026 03:21:44 +0000 (0:00:01.111) 0:00:23.612 ****
2026-02-04 03:21:48.825631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:21:48.825647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:21:48.825702 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:21:48.825733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:21:48.825746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:21:48.825781 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:21:48.825794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:21:48.825840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:21:48.825852 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:21:48.825882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:21:48.825895 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:21:48.825907 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:21:48.825918 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:21:48.825935 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:21:48.825955 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:21:48.825967 | orchestrator |
2026-02-04 03:21:48.825980 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] ***
2026-02-04 03:21:48.825993 | orchestrator | Wednesday 04 February 2026 03:21:45 +0000 (0:00:00.680) 0:00:24.425 ****
2026-02-04 03:21:48.826004 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 03:21:48.826064 | orchestrator |
2026-02-04 03:21:48.826077 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] ***
2026-02-04 03:21:48.826088 | orchestrator | Wednesday 04 February 2026 03:21:45 +0000 (0:00:00.780) 0:00:25.105 ****
2026-02-04 03:21:48.826100 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:21:48.826112 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:21:48.826122 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:21:48.826133 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:21:48.826144 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:21:48.826155 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:21:48.826165 | orchestrator |
2026-02-04 03:21:48.826186 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] *****
2026-02-04 03:21:48.826197 | orchestrator | Wednesday 04 February 2026 03:21:46 +0000 (0:00:00.780) 0:00:25.886 ****
2026-02-04 03:21:48.826208 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:21:48.826219 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:21:48.826229 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:21:48.826240 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:21:48.826250 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:21:48.826261 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:21:48.826272 | orchestrator |
2026-02-04 03:21:48.826283 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] ****
2026-02-04 03:21:48.826294 | orchestrator | Wednesday 04 February 2026 03:21:47 +0000 (0:00:00.948) 0:00:26.834 ****
2026-02-04 03:21:48.826305 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:21:48.826316 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:21:48.826327 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:21:48.826338 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:21:48.826348 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:21:48.826359 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:21:48.826370 | orchestrator |
2026-02-04 03:21:48.826381 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] **********************
2026-02-04 03:21:48.826392 | orchestrator | Wednesday 04 February 2026 03:21:48 +0000 (0:00:00.767) 0:00:27.602 ****
2026-02-04 03:21:48.826403 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:21:48.826414 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:21:48.826425 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:21:48.826435 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:21:48.826446 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:21:48.826457 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:21:48.826468 | orchestrator |
2026-02-04 03:21:53.901252 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] ************************
2026-02-04 03:21:53.901375 | orchestrator | Wednesday 04 February 2026 03:21:48 +0000 (0:00:00.601) 0:00:28.204 ****
2026-02-04 03:21:53.901392 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 03:21:53.901405 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-04 03:21:53.901416 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-04 03:21:53.901427 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-04 03:21:53.902226 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-04 03:21:53.902255 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-04 03:21:53.902267 | orchestrator |
2026-02-04 03:21:53.902279 | orchestrator | TASK [ceilometer : Copying over polling.yaml] **********************************
2026-02-04 03:21:53.902290 | orchestrator | Wednesday 04 February 2026 03:21:50 +0000 (0:00:01.500) 0:00:29.704 ****
2026-02-04 03:21:53.902330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:21:53.902362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:21:53.902375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:21:53.902386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:21:53.902398 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:21:53.902409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:21:53.902442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:21:53.902463 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:21:53.902476 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:21:53.902487 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:21:53.902498 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:21:53.902515 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:21:53.902527 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:21:53.902538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:21:53.902549 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:21:53.902560 | orchestrator |
2026-02-04 03:21:53.902572 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-02-04 03:21:53.902583 | orchestrator | Wednesday 04 February 2026 03:21:51 +0000 (0:00:00.876) 0:00:30.581 ****
2026-02-04 03:21:53.902594 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:21:53.902604 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:21:53.902615 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:21:53.902626 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:21:53.902636 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:21:53.902647 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:21:53.902658 | orchestrator |
2026-02-04 03:21:53.902669 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-02-04 03:21:53.902679 | orchestrator | Wednesday 04 February 2026 03:21:51 +0000 (0:00:00.800) 0:00:31.381 ****
2026-02-04 03:21:53.902690 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 03:21:53.902701 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-04 03:21:53.902712 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-04 03:21:53.902722 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-04 03:21:53.902733 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-04 03:21:53.902744 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-04 03:21:53.902755 | orchestrator |
2026-02-04 03:21:53.902766 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-02-04 03:21:53.902784 | orchestrator | Wednesday 04 February 2026 03:21:53 +0000 (0:00:01.360) 0:00:32.742 ****
2026-02-04 03:21:53.902832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period':
'5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-04 03:21:59.540981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-04 03:21:59.541120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-04 03:21:59.541140 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:21:59.541171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-04 03:21:59.541184 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:21:59.541197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-04 03:21:59.541209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-04 03:21:59.541244 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:21:59.541257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-04 03:21:59.541269 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:21:59.541299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-04 03:21:59.541312 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:21:59.541324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-04 03:21:59.541336 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:21:59.541347 | orchestrator | 2026-02-04 03:21:59.541365 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-02-04 03:21:59.541378 | orchestrator | Wednesday 04 February 2026 03:21:54 +0000 (0:00:01.107) 0:00:33.849 **** 2026-02-04 03:21:59.541389 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:21:59.541401 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:21:59.541412 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:21:59.541423 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:21:59.541434 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:21:59.541445 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:21:59.541457 | orchestrator | 2026-02-04 03:21:59.541468 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-02-04 03:21:59.541480 | orchestrator | Wednesday 04 February 2026 03:21:55 +0000 (0:00:00.772) 0:00:34.622 **** 2026-02-04 03:21:59.541491 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:21:59.541502 | orchestrator | 2026-02-04 03:21:59.541516 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-02-04 03:21:59.541530 | orchestrator | Wednesday 04 February 2026 03:21:55 +0000 (0:00:00.148) 0:00:34.771 **** 2026-02-04 03:21:59.541544 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:21:59.541557 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:21:59.541570 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:21:59.541584 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:21:59.541606 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:21:59.541619 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:21:59.541632 | 
orchestrator | 2026-02-04 03:21:59.541645 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-04 03:21:59.541658 | orchestrator | Wednesday 04 February 2026 03:21:56 +0000 (0:00:00.644) 0:00:35.416 **** 2026-02-04 03:21:59.541672 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 03:21:59.541687 | orchestrator | 2026-02-04 03:21:59.541701 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-02-04 03:21:59.541714 | orchestrator | Wednesday 04 February 2026 03:21:57 +0000 (0:00:01.287) 0:00:36.703 **** 2026-02-04 03:21:59.541729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-04 03:21:59.541752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-04 03:22:00.075443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-04 03:22:00.075548 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-04 03:22:00.075583 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-04 03:22:00.075620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-04 03:22:00.075634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-04 03:22:00.075647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-04 03:22:00.075681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-04 03:22:00.075693 | orchestrator | 2026-02-04 03:22:00.075707 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-02-04 03:22:00.075720 | orchestrator | Wednesday 04 February 2026 03:21:59 +0000 (0:00:02.210) 0:00:38.913 **** 2026-02-04 03:22:00.075733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-04 03:22:00.075750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-04 03:22:00.075771 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:22:00.075785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-04 03:22:00.075797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-04 03:22:00.075876 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:22:00.075888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-04 03:22:00.075909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-04 03:22:02.207065 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:22:02.207160 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-04 03:22:02.207176 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:22:02.207221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-04 03:22:02.207232 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:22:02.207240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': 
'30'}}})  2026-02-04 03:22:02.207248 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:22:02.207256 | orchestrator | 2026-02-04 03:22:02.207266 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-02-04 03:22:02.207276 | orchestrator | Wednesday 04 February 2026 03:22:00 +0000 (0:00:00.872) 0:00:39.785 **** 2026-02-04 03:22:02.207284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-04 03:22:02.207294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-04 03:22:02.207303 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:22:02.207326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-04 03:22:02.207340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-04 03:22:02.207355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-04 03:22:02.207363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:02.207372 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:22:02.207381 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:22:02.207389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:02.207397 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:22:02.207405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:02.207414 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:22:02.207430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:09.629179 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:22:09.629292 | orchestrator |
2026-02-04 03:22:09.629310 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-02-04 03:22:09.629324 | orchestrator | Wednesday 04 February 2026 03:22:02 +0000 (0:00:01.790) 0:00:41.575 ****
2026-02-04 03:22:09.629355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:09.629373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:09.629385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:09.629398 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:09.629412 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:09.629465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:09.629485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:09.629498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:09.629509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:09.629521 | orchestrator |
2026-02-04 03:22:09.629532 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] *******************************
2026-02-04 03:22:09.629543 | orchestrator | Wednesday 04 February 2026 03:22:04 +0000 (0:00:02.442) 0:00:44.018 ****
2026-02-04 03:22:09.629555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:09.629567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:09.629593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:18.792439 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:18.792549 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:18.792566 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:18.792579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:18.792592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:18.792627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:18.792641 | orchestrator |
2026-02-04 03:22:18.792655 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] *****************
2026-02-04 03:22:18.792684 | orchestrator | Wednesday 04 February 2026 03:22:09 +0000 (0:00:04.984) 0:00:49.002 ****
2026-02-04 03:22:18.792697 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 03:22:18.792709 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-04 03:22:18.792720 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-04 03:22:18.792731 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-04 03:22:18.792742 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-04 03:22:18.792752 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-04 03:22:18.792763 | orchestrator |
2026-02-04 03:22:18.792774 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************
2026-02-04 03:22:18.792785 | orchestrator | Wednesday 04 February 2026 03:22:11 +0000 (0:00:01.522) 0:00:50.525 ****
2026-02-04 03:22:18.792796 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:22:18.792893 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:22:18.792906 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:22:18.792917 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:22:18.792928 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:22:18.792939 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:22:18.792950 | orchestrator |
2026-02-04 03:22:18.792961 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] ***
2026-02-04 03:22:18.792975 | orchestrator | Wednesday 04 February 2026 03:22:11 +0000 (0:00:00.658) 0:00:51.183 ****
2026-02-04 03:22:18.792987 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:22:18.792999 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:22:18.793012 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:22:18.793025 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:22:18.793037 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:22:18.793050 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:22:18.793063 | orchestrator |
2026-02-04 03:22:18.793075 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] ***************************
2026-02-04 03:22:18.793088 | orchestrator | Wednesday 04 February 2026 03:22:13 +0000 (0:00:01.321) 0:00:52.808 ****
2026-02-04 03:22:18.793100 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:22:18.793112 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:22:18.793125 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:22:18.793138 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:22:18.793150 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:22:18.793163 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:22:18.793175 | orchestrator |
2026-02-04 03:22:18.793188 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] **************************
2026-02-04 03:22:18.793200 | orchestrator | Wednesday 04 February 2026 03:22:14 +0000 (0:00:01.321) 0:00:54.130 ****
2026-02-04 03:22:18.793212 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 03:22:18.793224 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-04 03:22:18.793237 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-04 03:22:18.793260 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-04 03:22:18.793273 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-04 03:22:18.793286 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-04 03:22:18.793298 | orchestrator |
2026-02-04 03:22:18.793310 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] *********************
2026-02-04 03:22:18.793321 | orchestrator | Wednesday 04 February 2026 03:22:16 +0000 (0:00:01.516) 0:00:55.647 ****
2026-02-04 03:22:18.793333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:18.793347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:18.793358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:18.793378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:19.607022 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:19.607125 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:19.607168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:19.607182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:19.607194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:19.607206 | orchestrator |
2026-02-04 03:22:19.607220 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] ****************************
2026-02-04 03:22:19.607233 | orchestrator | Wednesday 04 February 2026 03:22:18 +0000 (0:00:02.513) 0:00:58.160 ****
2026-02-04 03:22:19.607245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:19.607290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:19.607312 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:22:19.607325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:19.607348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:19.607359 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:22:19.607371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:19.607382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:19.607393 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:22:19.607405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:19.607416 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:22:19.607435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:23.042360 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:22:23.042489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:23.042510 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:22:23.042523 | orchestrator |
2026-02-04 03:22:23.042535 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] *****************************
2026-02-04 03:22:23.042548 | orchestrator | Wednesday 04 February 2026 03:22:19 +0000 (0:00:00.821) 0:00:58.982 ****
2026-02-04 03:22:23.042559 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:22:23.042570 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:22:23.042581 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:22:23.042592 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:22:23.042603 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:22:23.042614 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:22:23.042626 | orchestrator |
2026-02-04 03:22:23.042637 | orchestrator | TASK [ceilometer : Copying over existing policy file] **************************
2026-02-04 03:22:23.042648 | orchestrator | Wednesday 04 February 2026 03:22:20 +0000 (0:00:00.768) 0:00:59.750 ****
2026-02-04 03:22:23.042661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:23.042677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:23.042698 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:22:23.042717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:23.042769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:23.042791 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:22:23.042862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-04 03:22:23.042877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 03:22:23.042891 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:22:23.042905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:23.042918 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:22:23.042932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:23.042945 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:22:23.042958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-04 03:22:23.042980 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:22:23.042993 | orchestrator |
2026-02-04 03:22:23.043006 | orchestrator | TASK [ceilometer : Check ceilometer containers] ********************************
2026-02-04 03:22:23.043019 | orchestrator | Wednesday 04 February 2026 03:22:21 +0000 (0:00:00.884) 0:01:00.634 ****
2026-02-04 03:22:23.043042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-04 03:22:59.631541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-04 03:22:59.631676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-04 03:22:59.631703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-04 03:22:59.631717 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-04 03:22:59.631859 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-04 03:22:59.631878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 
'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-04 03:22:59.631924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-04 03:22:59.631938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-04 03:22:59.631950 | orchestrator | 2026-02-04 03:22:59.631963 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-04 03:22:59.631976 | 
orchestrator | Wednesday 04 February 2026 03:22:23 +0000 (0:00:01.781) 0:01:02.416 ****
2026-02-04 03:22:59.631987 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:22:59.632000 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:22:59.632011 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:22:59.632022 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:22:59.632032 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:22:59.632043 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:22:59.632054 | orchestrator |
2026-02-04 03:22:59.632066 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] *********************
2026-02-04 03:22:59.632079 | orchestrator | Wednesday 04 February 2026 03:22:23 +0000 (0:00:00.623) 0:01:03.040 ****
2026-02-04 03:22:59.632092 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:22:59.632104 | orchestrator |
2026-02-04 03:22:59.632118 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-04 03:22:59.632140 | orchestrator | Wednesday 04 February 2026 03:22:28 +0000 (0:00:04.882) 0:01:07.922 ****
2026-02-04 03:22:59.632153 | orchestrator |
2026-02-04 03:22:59.632165 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-04 03:22:59.632178 | orchestrator | Wednesday 04 February 2026 03:22:28 +0000 (0:00:00.077) 0:01:08.000 ****
2026-02-04 03:22:59.632190 | orchestrator |
2026-02-04 03:22:59.632203 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-04 03:22:59.632216 | orchestrator | Wednesday 04 February 2026 03:22:28 +0000 (0:00:00.088) 0:01:08.089 ****
2026-02-04 03:22:59.632229 | orchestrator |
2026-02-04 03:22:59.632243 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-04 03:22:59.632256 | orchestrator | Wednesday 04 February 2026 03:22:28 +0000 (0:00:00.244) 0:01:08.333 ****
2026-02-04 03:22:59.632266 | orchestrator |
2026-02-04 03:22:59.632277 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-04 03:22:59.632288 | orchestrator | Wednesday 04 February 2026 03:22:29 +0000 (0:00:00.067) 0:01:08.400 ****
2026-02-04 03:22:59.632298 | orchestrator |
2026-02-04 03:22:59.632309 | orchestrator | TASK [ceilometer : Flush handlers] *********************************************
2026-02-04 03:22:59.632320 | orchestrator | Wednesday 04 February 2026 03:22:29 +0000 (0:00:00.070) 0:01:08.471 ****
2026-02-04 03:22:59.632331 | orchestrator |
2026-02-04 03:22:59.632342 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] *******
2026-02-04 03:22:59.632352 | orchestrator | Wednesday 04 February 2026 03:22:29 +0000 (0:00:00.071) 0:01:08.542 ****
2026-02-04 03:22:59.632363 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:22:59.632374 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:22:59.632385 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:22:59.632396 | orchestrator |
2026-02-04 03:22:59.632406 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************
2026-02-04 03:22:59.632417 | orchestrator | Wednesday 04 February 2026 03:22:39 +0000 (0:00:10.230) 0:01:18.773 ****
2026-02-04 03:22:59.632428 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:22:59.632439 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:22:59.632450 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:22:59.632461 | orchestrator |
2026-02-04 03:22:59.632472 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************
2026-02-04 03:22:59.632482 | orchestrator | Wednesday 04 February 2026 03:22:48 +0000 (0:00:09.095) 0:01:27.868 ****
2026-02-04 03:22:59.632493 | orchestrator | changed: [testbed-node-4]
2026-02-04 03:22:59.632504 | orchestrator | changed: [testbed-node-5]
2026-02-04 03:22:59.632515 | orchestrator | changed: [testbed-node-3]
2026-02-04 03:22:59.632526 | orchestrator |
2026-02-04 03:22:59.632536 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 03:22:59.632549 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-04 03:22:59.632561 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-04 03:22:59.632579 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-04 03:23:00.113244 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-04 03:23:00.113347 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-04 03:23:00.113411 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-04 03:23:00.113424 | orchestrator |
2026-02-04 03:23:00.113437 | orchestrator |
2026-02-04 03:23:00.113448 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 03:23:00.113515 | orchestrator | Wednesday 04 February 2026 03:22:59 +0000 (0:00:11.123) 0:01:38.992 ****
2026-02-04 03:23:00.113528 | orchestrator | ===============================================================================
2026-02-04 03:23:00.113539 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.12s
2026-02-04 03:23:00.113550 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.23s
2026-02-04 03:23:00.113561 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.10s
2026-02-04 03:23:00.113572 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.98s
2026-02-04 03:23:00.113583 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.88s
2026-02-04 03:23:00.113594 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.94s
2026-02-04 03:23:00.113617 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.81s
2026-02-04 03:23:00.113628 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.35s
2026-02-04 03:23:00.113639 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.04s
2026-02-04 03:23:00.113650 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.51s
2026-02-04 03:23:00.113661 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.44s
2026-02-04 03:23:00.113672 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.21s
2026-02-04 03:23:00.113683 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.79s
2026-02-04 03:23:00.113694 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.78s
2026-02-04 03:23:00.113705 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.63s
2026-02-04 03:23:00.113717 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.53s
2026-02-04 03:23:00.113728 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.52s
2026-02-04 03:23:00.113739 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.52s
2026-02-04 03:23:00.113750 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.51s
2026-02-04 03:23:00.113762 | orchestrator | ceilometer : Check if custom polling.yaml exists ------------------------ 1.50s
2026-02-04 03:23:02.626645 | orchestrator | 2026-02-04 03:23:02 | INFO  | Task 5f3a432e-e550-4a02-8326-26cd5a7e643d (aodh) was prepared for execution.
2026-02-04 03:23:02.626748 | orchestrator | 2026-02-04 03:23:02 | INFO  | It takes a moment until task 5f3a432e-e550-4a02-8326-26cd5a7e643d (aodh) has been started and output is visible here.
2026-02-04 03:23:33.539789 | orchestrator |
2026-02-04 03:23:33.539954 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 03:23:33.539973 | orchestrator |
2026-02-04 03:23:33.539985 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 03:23:33.539997 | orchestrator | Wednesday 04 February 2026 03:23:06 +0000 (0:00:00.258) 0:00:00.258 ****
2026-02-04 03:23:33.540008 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:23:33.540021 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:23:33.540032 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:23:33.540043 | orchestrator |
2026-02-04 03:23:33.540054 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 03:23:33.540065 | orchestrator | Wednesday 04 February 2026 03:23:07 +0000 (0:00:00.325) 0:00:00.583 ****
2026-02-04 03:23:33.540076 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True)
2026-02-04 03:23:33.540088 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True)
2026-02-04 03:23:33.540098 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True)
2026-02-04 03:23:33.540109 | orchestrator |
2026-02-04 03:23:33.540120 | orchestrator | PLAY [Apply role aodh] *********************************************************
2026-02-04 03:23:33.540131 | orchestrator |
2026-02-04 03:23:33.540142 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-02-04 03:23:33.540177 | orchestrator | Wednesday 04 February 2026 03:23:07 +0000 (0:00:00.450) 0:00:01.034 ****
2026-02-04 03:23:33.540189 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 03:23:33.540200 | orchestrator |
2026-02-04 03:23:33.540211 | orchestrator | TASK [service-ks-register : aodh | Creating services] **************************
2026-02-04 03:23:33.540223 | orchestrator | Wednesday 04 February 2026 03:23:08 +0000 (0:00:00.608) 0:00:01.642 ****
2026-02-04 03:23:33.540236 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming))
2026-02-04 03:23:33.540254 | orchestrator |
2026-02-04 03:23:33.540269 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] *************************
2026-02-04 03:23:33.540280 | orchestrator | Wednesday 04 February 2026 03:23:11 +0000 (0:00:03.171) 0:00:04.814 ****
2026-02-04 03:23:33.540291 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal)
2026-02-04 03:23:33.540302 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public)
2026-02-04 03:23:33.540313 | orchestrator |
2026-02-04 03:23:33.540324 | orchestrator | TASK [service-ks-register : aodh | Creating projects] **************************
2026-02-04 03:23:33.540337 | orchestrator | Wednesday 04 February 2026 03:23:17 +0000 (0:00:06.090) 0:00:10.904 ****
2026-02-04 03:23:33.540350 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-04 03:23:33.540364 | orchestrator |
2026-02-04 03:23:33.540378 | orchestrator | TASK [service-ks-register : aodh | Creating users] *****************************
2026-02-04 03:23:33.540390 | orchestrator | Wednesday 04 February 2026 03:23:20 +0000 (0:00:03.359) 0:00:14.264 ****
2026-02-04 03:23:33.540403 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 03:23:33.540415 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service)
2026-02-04 03:23:33.540428 | orchestrator |
2026-02-04 03:23:33.540441 | orchestrator | TASK [service-ks-register : aodh | Creating roles] *****************************
2026-02-04 03:23:33.540453 | orchestrator | Wednesday 04 February 2026 03:23:24 +0000 (0:00:03.803) 0:00:18.067 ****
2026-02-04 03:23:33.540466 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-04 03:23:33.540479 | orchestrator |
2026-02-04 03:23:33.540491 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************
2026-02-04 03:23:33.540503 | orchestrator | Wednesday 04 February 2026 03:23:27 +0000 (0:00:03.184) 0:00:21.251 ****
2026-02-04 03:23:33.540516 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin)
2026-02-04 03:23:33.540528 | orchestrator |
2026-02-04 03:23:33.540541 | orchestrator | TASK [aodh : Ensuring config directories exist] ********************************
2026-02-04 03:23:33.540554 | orchestrator | Wednesday 04 February 2026 03:23:31 +0000 (0:00:03.671) 0:00:24.923 ****
2026-02-04 03:23:33.540570 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:33.540608 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:33.540630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:33.540643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:33.540655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:33.540668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:33.540679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:33.540699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:34.857771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:34.857884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:23:34.857895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:23:34.857903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:23:34.857910 | orchestrator |
2026-02-04 03:23:34.857918 | orchestrator | TASK [aodh : Check if policies shall be overwritten] ***************************
2026-02-04 03:23:34.857927 | orchestrator | Wednesday 04 February 2026 03:23:33 +0000 (0:00:01.964) 0:00:26.888 ****
2026-02-04 03:23:34.857934 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:23:34.857941 | orchestrator |
2026-02-04 03:23:34.857948 | orchestrator | TASK [aodh : Set aodh policy file] *********************************************
2026-02-04 03:23:34.857955 | orchestrator | Wednesday 04 February 2026 03:23:33 +0000 (0:00:00.149) 0:00:27.037 ****
2026-02-04 03:23:34.857962 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:23:34.857970 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:23:34.857977 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:23:34.857984 | orchestrator |
2026-02-04 03:23:34.857992 | orchestrator | TASK [aodh : Copying over existing policy file] ********************************
2026-02-04 03:23:34.857999 | orchestrator | Wednesday 04 February 2026 03:23:34 +0000 (0:00:00.515) 0:00:27.553 ****
2026-02-04 03:23:34.858007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:34.858095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:34.858105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:34.858112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:23:34.858119 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:23:34.858127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:34.858135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:34.858148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:34.858161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 03:23:39.706307 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:23:39.706412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-04 03:23:39.706429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 03:23:39.706440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 
'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:23:39.706450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 03:23:39.706480 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:23:39.706490 | orchestrator | 2026-02-04 03:23:39.706500 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-04 03:23:39.706510 | orchestrator | Wednesday 04 February 2026 03:23:34 +0000 (0:00:00.651) 0:00:28.205 **** 2026-02-04 03:23:39.706519 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:23:39.706528 | orchestrator | 2026-02-04 03:23:39.706537 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-02-04 03:23:39.706546 | orchestrator | Wednesday 04 February 2026 03:23:35 +0000 (0:00:00.779) 0:00:28.984 **** 2026-02-04 03:23:39.706555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': 
{'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-04 03:23:39.706581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-04 03:23:39.706591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-04 03:23:39.706600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-04 03:23:39.706616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-04 03:23:39.706626 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-04 03:23:39.706635 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:23:39.706651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:23:40.336599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-04 03:23:40.336697 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-04 03:23:40.336714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-04 03:23:40.336753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-04 03:23:40.336768 | orchestrator | 2026-02-04 03:23:40.336783 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-02-04 03:23:40.336795 | orchestrator | Wednesday 04 February 2026 03:23:39 +0000 (0:00:04.068) 0:00:33.053 **** 2026-02-04 03:23:40.336844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-04 03:23:40.336859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 03:23:40.336889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:23:40.336902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 03:23:40.336914 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:23:40.336927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-04 03:23:40.336948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 03:23:40.336990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:23:40.337002 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 03:23:40.337013 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:23:40.337031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-04 03:23:41.354266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 03:23:41.354370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': 
{'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:23:41.354380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 03:23:41.354389 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:23:41.354398 | orchestrator | 2026-02-04 03:23:41.354406 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-02-04 03:23:41.354414 | orchestrator | Wednesday 04 February 2026 03:23:40 +0000 (0:00:00.634) 0:00:33.687 **** 2026-02-04 03:23:41.354422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-04 03:23:41.354431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 03:23:41.354438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 03:23:41.354459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 03:23:41.354471 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:23:41.354478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-04 03:23:41.354486 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 03:23:41.354493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': 
{'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:41.354500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:23:41.354508 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:23:41.354520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:45.372030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:45.372142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:45.372159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:23:45.372174 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:23:45.372188 | orchestrator |
2026-02-04 03:23:45.372201 | orchestrator | TASK [aodh : Copying over config.json files for services] **********************
2026-02-04 03:23:45.372213 | orchestrator | Wednesday 04 February 2026 03:23:41 +0000 (0:00:01.021) 0:00:34.708 ****
2026-02-04 03:23:45.372226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:45.372247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:45.372293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:45.372347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:45.372363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:45.372375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:45.372387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:45.372398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:45.372410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:45.372438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:23:53.571162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:23:53.571265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:23:53.571280 | orchestrator |
2026-02-04 03:23:53.571293 | orchestrator | TASK [aodh : Copying over aodh.conf] *******************************************
2026-02-04 03:23:53.571305 | orchestrator | Wednesday 04 February 2026 03:23:45 +0000 (0:00:04.013) 0:00:38.721 ****
2026-02-04 03:23:53.571316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:53.571328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:53.571361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:53.571387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:53.571398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:53.571408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:53.571419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:53.571429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:53.571447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:53.571457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:23:53.571474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:23:58.584387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:23:58.584500 | orchestrator |
2026-02-04 03:23:58.584524 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************
2026-02-04 03:23:58.584541 | orchestrator | Wednesday 04 February 2026 03:23:53 +0000 (0:00:08.196) 0:00:46.918 ****
2026-02-04 03:23:58.584557 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:23:58.584574 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:23:58.584589 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:23:58.584604 | orchestrator |
2026-02-04 03:23:58.584619 | orchestrator | TASK [aodh : Check aodh containers] ********************************************
2026-02-04 03:23:58.584635 | orchestrator | Wednesday 04 February 2026 03:23:55 +0000 (0:00:01.753) 0:00:48.672 ****
2026-02-04 03:23:58.584651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:58.584707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:58.584726 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-02-04 03:23:58.584763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:58.584780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:58.584790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-04 03:23:58.584799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:58.584859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:58.584878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-04 03:23:58.584894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:23:58.584918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:24:44.955552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-04 03:24:44.955675 | orchestrator |
2026-02-04 03:24:44.955691 | orchestrator | TASK [aodh : include_tasks] ****************************************************
2026-02-04 03:24:44.955701 | orchestrator | Wednesday 04 February 2026 03:23:58 +0000 (0:00:00.323) 0:00:51.932 ****
2026-02-04 03:24:44.955709 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:24:44.955718 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:24:44.955725 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:24:44.955733 | orchestrator |
2026-02-04 03:24:44.955740 | orchestrator | TASK [aodh : Creating aodh database] *******************************************
2026-02-04 03:24:44.955747 | orchestrator | Wednesday 04 February 2026 03:23:58 +0000 (0:00:00.323) 0:00:52.255 ****
2026-02-04 03:24:44.955776 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:24:44.955784 | orchestrator |
2026-02-04 03:24:44.955794 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] **************
2026-02-04 03:24:44.955808 | orchestrator | Wednesday 04 February 2026 03:24:01 +0000 (0:00:02.115) 0:00:54.370 ****
2026-02-04 03:24:44.955894 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:24:44.955908 | orchestrator |
2026-02-04 03:24:44.955922 | orchestrator | TASK [aodh : Running aodh bootstrap container] *********************************
2026-02-04 03:24:44.955935 | orchestrator | Wednesday 04 February 2026 03:24:03 +0000 (0:00:02.120) 0:00:56.491 ****
2026-02-04 03:24:44.955949 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:24:44.955961 | orchestrator |
2026-02-04 03:24:44.955972 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-04 03:24:44.955980 | orchestrator | Wednesday 04 February 2026 03:24:15 +0000 (0:00:12.636) 0:01:09.128 ****
2026-02-04 03:24:44.955987 | orchestrator |
2026-02-04 03:24:44.955995 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-04 03:24:44.956002 | orchestrator | Wednesday 04 February 2026 03:24:15 +0000 (0:00:00.072) 0:01:09.200 ****
2026-02-04 03:24:44.956009 | orchestrator |
2026-02-04 03:24:44.956017 | orchestrator | TASK [aodh : Flush handlers] ***************************************************
2026-02-04 03:24:44.956024 | orchestrator | Wednesday 04 February 2026 03:24:15 +0000 (0:00:00.071) 0:01:09.272 ****
2026-02-04 03:24:44.956032 | orchestrator |
2026-02-04 03:24:44.956040 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] ****************************
2026-02-04 03:24:44.956048 | orchestrator | Wednesday 04 February 2026 03:24:16 +0000 (0:00:00.266) 0:01:09.538 ****
2026-02-04 03:24:44.956055 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:24:44.956063 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:24:44.956070 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:24:44.956077 | orchestrator |
2026-02-04 03:24:44.956085 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] **********************
2026-02-04 03:24:44.956094 | orchestrator | Wednesday 04 February 2026 03:24:21 +0000 (0:00:05.466) 0:01:15.005 ****
2026-02-04 03:24:44.956103 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:24:44.956111 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:24:44.956120 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:24:44.956128 | orchestrator |
2026-02-04 03:24:44.956137 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] ***********************
2026-02-04 03:24:44.956145 | orchestrator | Wednesday 04 February 2026 03:24:31 +0000 (0:00:09.918) 0:01:24.923 ****
2026-02-04 03:24:44.956154 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:24:44.956162 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:24:44.956171 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:24:44.956179 | orchestrator |
2026-02-04 03:24:44.956188 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] ***********************
2026-02-04 03:24:44.956196 | orchestrator | Wednesday 04 February 2026 03:24:36 +0000 (0:00:04.606) 0:01:29.530 ****
2026-02-04 03:24:44.956204 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:24:44.956213 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:24:44.956221 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:24:44.956230 | orchestrator |
2026-02-04 03:24:44.956239 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 03:24:44.956248 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 03:24:44.956258 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-04 03:24:44.956267 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-04 03:24:44.956276 | orchestrator |
2026-02-04 03:24:44.956284 | orchestrator |
2026-02-04 03:24:44.956292 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 03:24:44.956309 | orchestrator | Wednesday 04 February 2026 03:24:44 +0000 (0:00:08.409) 0:01:37.940 ****
2026-02-04 03:24:44.956318 | orchestrator | ===============================================================================
2026-02-04 03:24:44.956327 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 12.64s
2026-02-04 03:24:44.956336 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 9.92s
2026-02-04 03:24:44.956360 | orchestrator | aodh : Restart aodh-notifier container ---------------------------------- 8.41s
2026-02-04 03:24:44.956369 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.20s
2026-02-04 03:24:44.956378 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.09s
2026-02-04 03:24:44.956386 | orchestrator | aodh : Restart aodh-api container --------------------------------------- 5.47s
2026-02-04 03:24:44.956395 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 4.61s
2026-02-04 03:24:44.956403 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.07s
2026-02-04 03:24:44.956411 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.01s
2026-02-04 03:24:44.956419 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.80s
2026-02-04 03:24:44.956427 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.67s
2026-02-04 03:24:44.956436 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.36s
2026-02-04 03:24:44.956444 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.26s
2026-02-04 03:24:44.956453 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.18s
2026-02-04 03:24:44.956462 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.17s
2026-02-04 03:24:44.956469 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.12s
2026-02-04 03:24:44.956476 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.12s
2026-02-04 03:24:44.956483 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 1.96s
2026-02-04 03:24:44.956491 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.75s
2026-02-04 03:24:44.956499 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.02s
2026-02-04 03:24:47.434693 | orchestrator | 2026-02-04 03:24:47 | INFO  | Task 4971cfef-0b2b-411d-91c2-9e21ee81e0db (kolla-ceph-rgw) was prepared for execution.
2026-02-04 03:24:47.434790 | orchestrator | 2026-02-04 03:24:47 | INFO  | It takes a moment until task 4971cfef-0b2b-411d-91c2-9e21ee81e0db (kolla-ceph-rgw) has been started and output is visible here.
2026-02-04 03:25:23.004338 | orchestrator |
2026-02-04 03:25:23.004442 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 03:25:23.004457 | orchestrator |
2026-02-04 03:25:23.004467 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 03:25:23.004477 | orchestrator | Wednesday 04 February 2026 03:24:51 +0000 (0:00:00.273) 0:00:00.273 ****
2026-02-04 03:25:23.004504 | orchestrator | ok: [testbed-manager]
2026-02-04 03:25:23.004523 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:25:23.004533 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:25:23.004541 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:25:23.004550 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:25:23.004559 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:25:23.004568 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:25:23.004576 | orchestrator |
2026-02-04 03:25:23.004585 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 03:25:23.004594 | orchestrator | Wednesday 04 February 2026 03:24:52 +0000 (0:00:00.862) 0:00:01.136 ****
2026-02-04 03:25:23.004604 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-02-04 03:25:23.004613 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-02-04 03:25:23.004622 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-02-04 03:25:23.004652 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-02-04 03:25:23.004661 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-02-04 03:25:23.004670 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-02-04 03:25:23.004678 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-02-04 03:25:23.004687 | orchestrator |
2026-02-04 03:25:23.004696 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-02-04 03:25:23.004704 | orchestrator |
2026-02-04 03:25:23.004713 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-02-04 03:25:23.004722 | orchestrator | Wednesday 04 February 2026 03:24:53 +0000 (0:00:00.736) 0:00:01.873 ****
2026-02-04 03:25:23.004731 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 03:25:23.004742 | orchestrator |
2026-02-04 03:25:23.004751 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-02-04 03:25:23.004760 | orchestrator | Wednesday 04 February 2026 03:24:54 +0000 (0:00:01.535) 0:00:03.409 ****
2026-02-04 03:25:23.004768 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-02-04 03:25:23.004777 | orchestrator |
2026-02-04 03:25:23.004785 | orchestrator | TASK [service-ks-register :
ceph-rgw | Creating endpoints] ********************* 2026-02-04 03:25:23.004794 | orchestrator | Wednesday 04 February 2026 03:24:58 +0000 (0:00:03.743) 0:00:07.152 **** 2026-02-04 03:25:23.004803 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-04 03:25:23.004814 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-04 03:25:23.004861 | orchestrator | 2026-02-04 03:25:23.004871 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-04 03:25:23.004880 | orchestrator | Wednesday 04 February 2026 03:25:04 +0000 (0:00:06.156) 0:00:13.309 **** 2026-02-04 03:25:23.004891 | orchestrator | ok: [testbed-manager] => (item=service) 2026-02-04 03:25:23.004902 | orchestrator | 2026-02-04 03:25:23.004912 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-04 03:25:23.004922 | orchestrator | Wednesday 04 February 2026 03:25:07 +0000 (0:00:03.063) 0:00:16.373 **** 2026-02-04 03:25:23.004932 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 03:25:23.004942 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-02-04 03:25:23.004952 | orchestrator | 2026-02-04 03:25:23.004962 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-04 03:25:23.004973 | orchestrator | Wednesday 04 February 2026 03:25:11 +0000 (0:00:03.699) 0:00:20.073 **** 2026-02-04 03:25:23.004983 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-02-04 03:25:23.004993 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-02-04 03:25:23.005003 | orchestrator | 2026-02-04 03:25:23.005014 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 
2026-02-04 03:25:23.005024 | orchestrator | Wednesday 04 February 2026 03:25:17 +0000 (0:00:06.067) 0:00:26.141 ****
2026-02-04 03:25:23.005033 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-02-04 03:25:23.005041 | orchestrator |
2026-02-04 03:25:23.005050 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 03:25:23.005059 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:25:23.005068 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:25:23.005077 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:25:23.005092 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:25:23.005101 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:25:23.005126 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:25:23.005136 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:25:23.005145 | orchestrator |
2026-02-04 03:25:23.005153 | orchestrator |
2026-02-04 03:25:23.005176 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 03:25:23.005185 | orchestrator | Wednesday 04 February 2026 03:25:22 +0000 (0:00:04.799) 0:00:30.941 ****
2026-02-04 03:25:23.005194 | orchestrator | ===============================================================================
2026-02-04 03:25:23.005203 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.16s
2026-02-04 03:25:23.005211 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.07s
2026-02-04 03:25:23.005220 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.80s
2026-02-04 03:25:23.005228 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.74s
2026-02-04 03:25:23.005237 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.70s
2026-02-04 03:25:23.005245 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.06s
2026-02-04 03:25:23.005254 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.54s
2026-02-04 03:25:23.005263 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s
2026-02-04 03:25:23.005271 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s
2026-02-04 03:25:25.386298 | orchestrator | 2026-02-04 03:25:25 | INFO  | Task 5f465b55-c3ea-46b5-a708-0c57a8593b8b (gnocchi) was prepared for execution.
2026-02-04 03:25:25.386398 | orchestrator | 2026-02-04 03:25:25 | INFO  | It takes a moment until task 5f465b55-c3ea-46b5-a708-0c57a8593b8b (gnocchi) has been started and output is visible here.
2026-02-04 03:25:30.524237 | orchestrator |
2026-02-04 03:25:30.524341 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 03:25:30.524355 | orchestrator |
2026-02-04 03:25:30.524367 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 03:25:30.524377 | orchestrator | Wednesday 04 February 2026 03:25:29 +0000 (0:00:00.272) 0:00:00.272 ****
2026-02-04 03:25:30.524388 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:25:30.524399 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:25:30.524409 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:25:30.524418 | orchestrator |
2026-02-04 03:25:30.524428 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 03:25:30.524438 | orchestrator | Wednesday 04 February 2026 03:25:29 +0000 (0:00:00.317) 0:00:00.590 ****
2026-02-04 03:25:30.524448 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False)
2026-02-04 03:25:30.524458 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True
2026-02-04 03:25:30.524469 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False)
2026-02-04 03:25:30.524479 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False)
2026-02-04 03:25:30.524488 | orchestrator |
2026-02-04 03:25:30.524498 | orchestrator | PLAY [Apply role gnocchi] ******************************************************
2026-02-04 03:25:30.524508 | orchestrator | skipping: no hosts matched
2026-02-04 03:25:30.524518 | orchestrator |
2026-02-04 03:25:30.524528 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 03:25:30.524538 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:25:30.524583 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:25:30.524600 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:25:30.524617 | orchestrator |
2026-02-04 03:25:30.524633 | orchestrator |
2026-02-04 03:25:30.524649 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 03:25:30.524667 | orchestrator | Wednesday 04 February 2026 03:25:30 +0000 (0:00:00.359) 0:00:00.950 ****
2026-02-04 03:25:30.524683 | orchestrator | ===============================================================================
2026-02-04 03:25:30.524699 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s
2026-02-04 03:25:30.524716 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-02-04 03:25:32.842575 | orchestrator | 2026-02-04 03:25:32 | INFO  | Task d5aba8d7-62af-4e95-b7ad-e7671a7cbab3 (manila) was prepared for execution.
2026-02-04 03:25:32.842669 | orchestrator | 2026-02-04 03:25:32 | INFO  | It takes a moment until task d5aba8d7-62af-4e95-b7ad-e7671a7cbab3 (manila) has been started and output is visible here.
2026-02-04 03:26:13.680373 | orchestrator |
2026-02-04 03:26:13.680501 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 03:26:13.680520 | orchestrator |
2026-02-04 03:26:13.680533 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 03:26:13.680545 | orchestrator | Wednesday 04 February 2026 03:25:37 +0000 (0:00:00.271) 0:00:00.271 ****
2026-02-04 03:26:13.680569 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:26:13.680582 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:26:13.680594 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:26:13.680605 | orchestrator |
2026-02-04 03:26:13.680617 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 03:26:13.680629 | orchestrator | Wednesday 04 February 2026 03:25:37 +0000 (0:00:00.319) 0:00:00.590 ****
2026-02-04 03:26:13.680640 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True)
2026-02-04 03:26:13.680652 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True)
2026-02-04 03:26:13.680663 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True)
2026-02-04 03:26:13.680674 | orchestrator |
2026-02-04 03:26:13.680699 | orchestrator | PLAY [Apply role manila] *******************************************************
2026-02-04 03:26:13.680711 | orchestrator |
2026-02-04 03:26:13.680722 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-04 03:26:13.680733 | orchestrator | Wednesday 04 February 2026 03:25:37 +0000 (0:00:00.444) 0:00:01.034 ****
2026-02-04 03:26:13.680745 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 03:26:13.680757 | orchestrator |
2026-02-04 03:26:13.680768 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-04 03:26:13.680779 | orchestrator | Wednesday 04 February 2026 03:25:38 +0000 (0:00:00.591) 0:00:01.626 ****
2026-02-04 03:26:13.680790 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:26:13.680802 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:26:13.680813 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:26:13.680825 | orchestrator |
2026-02-04 03:26:13.680855 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************
2026-02-04 03:26:13.680867 | orchestrator | Wednesday 04 February 2026 03:25:38 +0000 (0:00:00.455) 0:00:02.082 ****
2026-02-04 03:26:13.680878 | orchestrator | changed: [testbed-node-0] => (item=manila (share))
2026-02-04 03:26:13.680889 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2))
2026-02-04 03:26:13.680900 | orchestrator |
2026-02-04 03:26:13.680912 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] ***********************
2026-02-04 03:26:13.680950 | orchestrator | Wednesday 04 February 2026 03:25:45 +0000 (0:00:06.243) 0:00:08.325 ****
2026-02-04 03:26:13.680965 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal)
2026-02-04 03:26:13.680979 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public)
2026-02-04 03:26:13.680992 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal)
2026-02-04 03:26:13.681003 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public)
2026-02-04 03:26:13.681014 | orchestrator |
2026-02-04 03:26:13.681025 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************
2026-02-04 03:26:13.681036 | orchestrator | Wednesday 04 February 2026 03:25:57 +0000 (0:00:12.521) 0:00:20.847 ****
2026-02-04 03:26:13.681047 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-04 03:26:13.681058 | orchestrator |
2026-02-04 03:26:13.681069 | orchestrator | TASK [service-ks-register : manila | Creating users] ***************************
2026-02-04 03:26:13.681080 | orchestrator | Wednesday 04 February 2026 03:26:00 +0000 (0:00:03.157) 0:00:24.004 ****
2026-02-04 03:26:13.681091 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-04 03:26:13.681103 | orchestrator | changed: [testbed-node-0] => (item=manila -> service)
2026-02-04 03:26:13.681113 | orchestrator |
2026-02-04 03:26:13.681124 | orchestrator | TASK [service-ks-register : manila | Creating roles] ***************************
2026-02-04 03:26:13.681135 | orchestrator | Wednesday 04 February 2026 03:26:04 +0000 (0:00:03.785) 0:00:27.789 ****
2026-02-04 03:26:13.681146 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-04 03:26:13.681157 | orchestrator |
2026-02-04 03:26:13.681168 | orchestrator | TASK [service-ks-register : manila | Granting user roles] **********************
2026-02-04 03:26:13.681179 | orchestrator | Wednesday 04 February 2026 03:26:07 +0000 (0:00:03.312) 0:00:31.101 ****
2026-02-04 03:26:13.681190 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin)
2026-02-04 03:26:13.681201 | orchestrator |
2026-02-04 03:26:13.681212 | orchestrator | TASK [manila : Ensuring config directories exist] ******************************
2026-02-04 03:26:13.681223 | orchestrator | Wednesday 04 February 2026 03:26:11 +0000 (0:00:03.650) 0:00:34.751 ****
2026-02-04 03:26:13.681254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-04 03:26:13.681276 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-04 03:26:13.681298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-04 03:26:13.681311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 03:26:13.681325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 03:26:13.681337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 03:26:13.681357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-04 03:26:23.840410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-04 03:26:23.840554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-04 03:26:23.840572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-04 03:26:23.840584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-04 03:26:23.840596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-04 03:26:23.840608 | orchestrator |
2026-02-04 03:26:23.840622 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-04 03:26:23.840634 | orchestrator | Wednesday 04 February 2026 03:26:13 +0000 (0:00:02.237) 0:00:36.989 ****
2026-02-04 03:26:23.840645 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 03:26:23.840656 | orchestrator |
2026-02-04 03:26:23.840667 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] **************
2026-02-04 03:26:23.840678 | orchestrator | Wednesday 04 February 2026 03:26:14 +0000 (0:00:00.549) 0:00:37.538 ****
2026-02-04 03:26:23.840689 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:26:23.840700 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:26:23.840711 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:26:23.840722 | orchestrator |
2026-02-04 03:26:23.840732 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] *********************
2026-02-04 03:26:23.840743 | orchestrator | Wednesday 04 February 2026 03:26:15 +0000 (0:00:00.948) 0:00:38.487 ****
2026-02-04 03:26:23.840755 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-04 03:26:23.840801 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-04 03:26:23.840814 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-04 03:26:23.840875 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-04 03:26:23.840889 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-04 03:26:23.840900 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-04 03:26:23.840911 | orchestrator |
2026-02-04 03:26:23.840924 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-02-04 03:26:23.840937 | orchestrator | Wednesday 04 February 2026 03:26:16 +0000 (0:00:01.725) 0:00:40.212 ****
2026-02-04 03:26:23.840950 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-04 03:26:23.840962 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-04 03:26:23.840975 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-04 03:26:23.840988 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-04 03:26:23.841000 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-04 03:26:23.841012 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-04 03:26:23.841025 | orchestrator |
2026-02-04 03:26:23.841037 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-02-04 03:26:23.841050 | orchestrator | Wednesday 04 February 2026 03:26:18 +0000 (0:00:01.144) 0:00:41.357 ****
2026-02-04 03:26:23.841062 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-02-04 03:26:23.841076 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-02-04 03:26:23.841088 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-02-04 03:26:23.841100 | orchestrator |
2026-02-04 03:26:23.841113 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-02-04 03:26:23.841126 | orchestrator | Wednesday 04 February 2026 03:26:18 +0000 (0:00:00.670) 0:00:42.028 ****
2026-02-04 03:26:23.841139 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:26:23.841151 | orchestrator |
2026-02-04 03:26:23.841164 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-02-04 03:26:23.841177 | orchestrator | Wednesday 04 February 2026 03:26:18 +0000 (0:00:00.154) 0:00:42.183 ****
2026-02-04 03:26:23.841189 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:26:23.841202 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:26:23.841214 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:26:23.841226 | orchestrator |
2026-02-04 03:26:23.841238 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-04 03:26:23.841251 | orchestrator | Wednesday 04 February 2026 03:26:19 +0000 (0:00:00.525) 0:00:42.708 ****
2026-02-04 03:26:23.841271 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 03:26:23.841284 | orchestrator |
2026-02-04 03:26:23.841295 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-02-04 03:26:23.841306 | orchestrator | Wednesday 04 February 2026 03:26:20 +0000 (0:00:00.578) 0:00:43.286 ****
2026-02-04 03:26:23.841326 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-04 03:26:24.676209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-04 03:26:24.676351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-04 03:26:24.676380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-04 03:26:24.676402 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:24.676451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:24.676494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:24.676527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:24.676550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:24.676569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:24.676586 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:24.676606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:24.676619 | orchestrator | 2026-02-04 03:26:24.676632 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-02-04 03:26:24.676644 | orchestrator | Wednesday 04 February 2026 03:26:23 +0000 (0:00:03.875) 0:00:47.162 **** 2026-02-04 03:26:24.676666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 03:26:25.321421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:26:25.321525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 03:26:25.321543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 03:26:25.321557 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:26:25.321571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 03:26:25.321610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:26:25.321622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 03:26:25.321661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 03:26:25.321683 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:26:25.321701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 03:26:25.321720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:26:25.321753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 03:26:25.321772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 03:26:25.321788 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:26:25.321805 | orchestrator | 2026-02-04 03:26:25.321823 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-02-04 03:26:25.321903 | orchestrator | Wednesday 04 February 2026 03:26:24 +0000 (0:00:00.853) 0:00:48.015 **** 2026-02-04 03:26:25.321941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 03:26:29.873131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:26:29.873248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 03:26:29.873291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 03:26:29.873305 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:26:29.873320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 03:26:29.873333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:26:29.873360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 03:26:29.873391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 03:26:29.873403 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:26:29.873415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 03:26:29.873436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:26:29.873448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 03:26:29.873459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 03:26:29.873471 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:26:29.873483 | orchestrator | 2026-02-04 03:26:29.873496 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-02-04 03:26:29.873509 | orchestrator | Wednesday 04 
February 2026 03:26:25 +0000 (0:00:00.879) 0:00:48.894 **** 2026-02-04 03:26:29.873533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 03:26:36.492706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 03:26:36.492896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 
'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 03:26:36.492915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:36.492927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 
'timeout': '30'}}}) 2026-02-04 03:26:36.492952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:36.492980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:36.492991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
manila-share 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:36.493009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:36.493019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:36.493028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:36.493038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:36.493048 | orchestrator | 2026-02-04 03:26:36.493059 | orchestrator | TASK [manila : Copying over manila.conf] *************************************** 2026-02-04 03:26:36.493073 | orchestrator | Wednesday 04 February 2026 03:26:30 +0000 (0:00:04.511) 0:00:53.406 **** 2026-02-04 03:26:36.493090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 03:26:40.850690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 03:26:40.850800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 03:26:40.850817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:40.850830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 03:26:40.850886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:40.850916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 03:26:40.850981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:40.850995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 03:26:40.851007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:40.851020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:40.851037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-04 03:26:40.851050 | orchestrator | 2026-02-04 03:26:40.851063 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-02-04 03:26:40.851076 | orchestrator | Wednesday 04 February 2026 03:26:36 +0000 (0:00:06.410) 0:00:59.816 **** 
2026-02-04 03:26:40.851095 | orchestrator | changed: [testbed-node-0] => (item=manila-share) 2026-02-04 03:26:40.851108 | orchestrator | changed: [testbed-node-1] => (item=manila-share) 2026-02-04 03:26:40.851118 | orchestrator | changed: [testbed-node-2] => (item=manila-share) 2026-02-04 03:26:40.851129 | orchestrator | 2026-02-04 03:26:40.851141 | orchestrator | TASK [manila : Copying over existing policy file] ****************************** 2026-02-04 03:26:40.851152 | orchestrator | Wednesday 04 February 2026 03:26:40 +0000 (0:00:03.684) 0:01:03.501 **** 2026-02-04 03:26:40.851172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 03:26:44.094717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:26:44.094804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 03:26:44.094816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 03:26:44.094825 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:26:44.094881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 03:26:44.094908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 03:26:44.094915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 03:26:44.094935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 03:26:44.094943 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:26:44.094950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-04 03:26:44.094957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 
5672'], 'timeout': '30'}}})  2026-02-04 03:26:44.094969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 03:26:44.094988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 03:26:44.094995 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:26:44.095002 | orchestrator | 2026-02-04 03:26:44.095010 | orchestrator | TASK [manila : Check manila containers] **************************************** 2026-02-04 03:26:44.095018 | orchestrator | Wednesday 04 February 2026 03:26:40 +0000 (0:00:00.668) 0:01:04.169 **** 2026-02-04 03:26:44.095031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 03:27:23.343720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 03:27:23.343816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-04 03:27:23.343895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:27:23.343906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:27:23.343913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-04 03:27:23.343933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-04 03:27:23.343943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-04 03:27:23.343951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-04 03:27:23.343960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-04 03:27:23.343977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-04 03:27:23.343986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-04 03:27:23.343994 | orchestrator | 2026-02-04 03:27:23.344003 | orchestrator | TASK [manila : Creating Manila database] *************************************** 2026-02-04 03:27:23.344012 | orchestrator | Wednesday 04 February 2026 03:26:44 +0000 (0:00:03.256) 0:01:07.425 **** 2026-02-04 03:27:23.344020 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:27:23.344028 | orchestrator | 2026-02-04 03:27:23.344036 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] ********** 2026-02-04 03:27:23.344043 | orchestrator | Wednesday 04 February 2026 03:26:46 +0000 (0:00:02.072) 0:01:09.498 **** 2026-02-04 03:27:23.344051 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:27:23.344058 | orchestrator | 2026-02-04 03:27:23.344065 | orchestrator | TASK [manila : Running Manila bootstrap container] ***************************** 2026-02-04 03:27:23.344073 | orchestrator | Wednesday 04 February 2026 03:26:48 +0000 (0:00:02.312) 0:01:11.811 **** 2026-02-04 03:27:23.344080 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:27:23.344087 | orchestrator | 2026-02-04 03:27:23.344095 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-04 03:27:23.344103 | orchestrator | Wednesday 04 February 2026 03:27:23 +0000 (0:00:34.528) 0:01:46.339 **** 2026-02-04 03:27:23.344110 | orchestrator | 2026-02-04 03:27:23.344123 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-04 03:28:14.522982 | orchestrator | Wednesday 04 February 2026 
03:27:23 +0000 (0:00:00.074) 0:01:46.413 ****
2026-02-04 03:28:14.523122 | orchestrator |
2026-02-04 03:28:14.523143 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-04 03:28:14.523156 | orchestrator | Wednesday 04 February 2026 03:27:23 +0000 (0:00:00.075) 0:01:46.489 ****
2026-02-04 03:28:14.523168 | orchestrator |
2026-02-04 03:28:14.523192 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-02-04 03:28:14.523205 | orchestrator | Wednesday 04 February 2026 03:27:23 +0000 (0:00:00.073) 0:01:46.563 ****
2026-02-04 03:28:14.523217 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:28:14.523230 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:28:14.523242 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:28:14.523254 | orchestrator |
2026-02-04 03:28:14.523266 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-02-04 03:28:14.523305 | orchestrator | Wednesday 04 February 2026 03:27:37 +0000 (0:00:14.059) 0:02:00.622 ****
2026-02-04 03:28:14.523318 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:28:14.523329 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:28:14.523341 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:28:14.523353 | orchestrator |
2026-02-04 03:28:14.523364 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-02-04 03:28:14.523376 | orchestrator | Wednesday 04 February 2026 03:27:47 +0000 (0:00:10.489) 0:02:11.112 ****
2026-02-04 03:28:14.523388 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:28:14.523399 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:28:14.523411 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:28:14.523422 | orchestrator |
2026-02-04 03:28:14.523434 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-02-04 03:28:14.523445 | orchestrator | Wednesday 04 February 2026 03:27:57 +0000 (0:00:09.611) 0:02:20.723 ****
2026-02-04 03:28:14.523457 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:28:14.523468 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:28:14.523480 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:28:14.523491 | orchestrator |
2026-02-04 03:28:14.523503 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 03:28:14.523516 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 03:28:14.523529 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-04 03:28:14.523541 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-04 03:28:14.523553 | orchestrator |
2026-02-04 03:28:14.523564 | orchestrator |
2026-02-04 03:28:14.523576 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 03:28:14.523588 | orchestrator | Wednesday 04 February 2026 03:28:14 +0000 (0:00:16.559) 0:02:37.283 ****
2026-02-04 03:28:14.523599 | orchestrator | ===============================================================================
2026-02-04 03:28:14.523611 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 34.53s
2026-02-04 03:28:14.523637 | orchestrator | manila : Restart manila-share container -------------------------------- 16.56s
2026-02-04 03:28:14.523649 | orchestrator | manila : Restart manila-api container ---------------------------------- 14.06s
2026-02-04 03:28:14.523661 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 12.52s
2026-02-04 03:28:14.523672 | orchestrator | manila : Restart manila-data container --------------------------------- 10.49s
2026-02-04 03:28:14.523684 | orchestrator | manila : Restart manila-scheduler container ----------------------------- 9.61s
2026-02-04 03:28:14.523695 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.41s
2026-02-04 03:28:14.523707 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.24s
2026-02-04 03:28:14.523718 | orchestrator | manila : Copying over config.json files for services -------------------- 4.51s
2026-02-04 03:28:14.523730 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 3.88s
2026-02-04 03:28:14.523741 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.79s
2026-02-04 03:28:14.523753 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.68s
2026-02-04 03:28:14.523765 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.65s
2026-02-04 03:28:14.523776 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.31s
2026-02-04 03:28:14.523788 | orchestrator | manila : Check manila containers ---------------------------------------- 3.26s
2026-02-04 03:28:14.523799 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.16s
2026-02-04 03:28:14.523811 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.31s
2026-02-04 03:28:14.523831 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.24s
2026-02-04 03:28:14.523843 | orchestrator | manila : Creating Manila database --------------------------------------- 2.07s
2026-02-04 03:28:14.523854 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.73s
2026-02-04 03:28:14.874174 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-02-04 03:28:27.143317 | orchestrator | 2026-02-04 03:28:27 | INFO  | Task 99a863fb-84fa-4dfb-9969-972bf20ecb85 (netdata) was prepared for execution.
2026-02-04 03:28:27.143424 | orchestrator | 2026-02-04 03:28:27 | INFO  | It takes a moment until task 99a863fb-84fa-4dfb-9969-972bf20ecb85 (netdata) has been started and output is visible here.
2026-02-04 03:30:02.986448 | orchestrator |
2026-02-04 03:30:02.986546 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 03:30:02.986559 | orchestrator |
2026-02-04 03:30:02.986567 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 03:30:02.986575 | orchestrator | Wednesday 04 February 2026 03:28:31 +0000 (0:00:00.240) 0:00:00.240 ****
2026-02-04 03:30:02.986583 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-02-04 03:30:02.986591 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-02-04 03:30:02.986598 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-02-04 03:30:02.986606 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-02-04 03:30:02.986613 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-02-04 03:30:02.986620 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-02-04 03:30:02.986628 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-02-04 03:30:02.986635 | orchestrator |
2026-02-04 03:30:02.986642 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-02-04 03:30:02.986649 | orchestrator |
2026-02-04 03:30:02.986656 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-02-04 03:30:02.986663 | orchestrator | Wednesday 04 February 2026 03:28:32 +0000 (0:00:00.878) 0:00:01.119 ****
2026-02-04 03:30:02.986672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 03:30:02.986682 | orchestrator |
2026-02-04 03:30:02.986689 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-02-04 03:30:02.986697 | orchestrator | Wednesday 04 February 2026 03:28:33 +0000 (0:00:01.346) 0:00:02.465 ****
2026-02-04 03:30:02.986704 | orchestrator | ok: [testbed-manager]
2026-02-04 03:30:02.986713 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:30:02.986720 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:30:02.986727 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:30:02.986735 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:30:02.986742 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:30:02.986749 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:30:02.986756 | orchestrator |
2026-02-04 03:30:02.986763 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-02-04 03:30:02.986771 | orchestrator | Wednesday 04 February 2026 03:28:35 +0000 (0:00:02.245) 0:00:04.263 ****
2026-02-04 03:30:02.986778 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:30:02.986785 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:30:02.986792 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:30:02.986799 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:30:02.986806 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:30:02.986814 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:30:02.986821 | orchestrator | ok: [testbed-manager]
2026-02-04 03:30:02.986828 | orchestrator |
2026-02-04 03:30:02.986849 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-02-04 03:30:02.986876 | orchestrator | Wednesday 04 February 2026 03:28:37 +0000 (0:00:02.245) 0:00:06.508 ****
2026-02-04 03:30:02.986884 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:30:02.986925 | orchestrator | changed: [testbed-manager]
2026-02-04 03:30:02.986963 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:30:02.986971 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:30:02.986978 | orchestrator | changed: [testbed-node-3]
2026-02-04 03:30:02.986987 | orchestrator | changed: [testbed-node-4]
2026-02-04 03:30:02.986996 | orchestrator | changed: [testbed-node-5]
2026-02-04 03:30:02.987004 | orchestrator |
2026-02-04 03:30:02.987013 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-02-04 03:30:02.987022 | orchestrator | Wednesday 04 February 2026 03:28:39 +0000 (0:00:01.502) 0:00:08.011 ****
2026-02-04 03:30:02.987030 | orchestrator | changed: [testbed-manager]
2026-02-04 03:30:02.987039 | orchestrator | changed: [testbed-node-4]
2026-02-04 03:30:02.987047 | orchestrator | changed: [testbed-node-3]
2026-02-04 03:30:02.987055 | orchestrator | changed: [testbed-node-5]
2026-02-04 03:30:02.987064 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:30:02.987072 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:30:02.987080 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:30:02.987089 | orchestrator |
2026-02-04 03:30:02.987097 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-02-04 03:30:02.987106 | orchestrator | Wednesday 04 February 2026 03:28:58 +0000 (0:00:19.297) 0:00:27.309 ****
2026-02-04 03:30:02.987115 | orchestrator | changed: [testbed-node-3]
2026-02-04 03:30:02.987123 | orchestrator | changed: [testbed-node-5]
2026-02-04 03:30:02.987132 | orchestrator | changed: [testbed-node-4]
2026-02-04 03:30:02.987140 | orchestrator | changed: [testbed-manager]
2026-02-04 03:30:02.987149 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:30:02.987157 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:30:02.987166 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:30:02.987175 | orchestrator |
2026-02-04 03:30:02.987183 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-02-04 03:30:02.987192 | orchestrator | Wednesday 04 February 2026 03:29:37 +0000 (0:00:38.656) 0:01:05.965 ****
2026-02-04 03:30:02.987201 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 03:30:02.987212 | orchestrator |
2026-02-04 03:30:02.987220 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-02-04 03:30:02.987229 | orchestrator | Wednesday 04 February 2026 03:29:38 +0000 (0:00:01.577) 0:01:07.542 ****
2026-02-04 03:30:02.987238 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-02-04 03:30:02.987247 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-02-04 03:30:02.987255 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-02-04 03:30:02.987264 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-02-04 03:30:02.987286 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-02-04 03:30:02.987295 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-02-04 03:30:02.987303 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-02-04 03:30:02.987312 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-02-04 03:30:02.987320 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-02-04 03:30:02.987328 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-02-04 03:30:02.987337 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-02-04 03:30:02.987345 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-02-04 03:30:02.987352 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-02-04 03:30:02.987359 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-02-04 03:30:02.987366 | orchestrator |
2026-02-04 03:30:02.987374 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-02-04 03:30:02.987389 | orchestrator | Wednesday 04 February 2026 03:29:42 +0000 (0:00:03.535) 0:01:11.078 ****
2026-02-04 03:30:02.987396 | orchestrator | ok: [testbed-manager]
2026-02-04 03:30:02.987403 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:30:02.987410 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:30:02.987418 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:30:02.987425 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:30:02.987432 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:30:02.987449 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:30:02.987456 | orchestrator |
2026-02-04 03:30:02.987464 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-02-04 03:30:02.987471 | orchestrator | Wednesday 04 February 2026 03:29:43 +0000 (0:00:01.275) 0:01:12.353 ****
2026-02-04 03:30:02.987478 | orchestrator | changed: [testbed-manager]
2026-02-04 03:30:02.987485 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:30:02.987493 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:30:02.987500 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:30:02.987507 | orchestrator | changed: [testbed-node-3]
2026-02-04 03:30:02.987514 | orchestrator | changed: [testbed-node-4]
2026-02-04 03:30:02.987521 | orchestrator | changed: [testbed-node-5]
2026-02-04 03:30:02.987528 | orchestrator |
2026-02-04 03:30:02.987536 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-04 03:30:02.987543 | orchestrator | Wednesday 04 February 2026 03:29:44 +0000 (0:00:01.230) 0:01:13.583 ****
2026-02-04 03:30:02.987550 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:30:02.987557 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:30:02.987565 | orchestrator | ok: [testbed-manager]
2026-02-04 03:30:02.987572 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:30:02.987579 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:30:02.987586 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:30:02.987593 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:30:02.987600 | orchestrator |
2026-02-04 03:30:02.987608 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-02-04 03:30:02.987615 | orchestrator | Wednesday 04 February 2026 03:29:46 +0000 (0:00:01.233) 0:01:14.816 ****
2026-02-04 03:30:02.987622 | orchestrator | ok: [testbed-manager]
2026-02-04 03:30:02.987629 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:30:02.987636 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:30:02.987644 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:30:02.987651 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:30:02.987658 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:30:02.987665 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:30:02.987672 | orchestrator |
2026-02-04 03:30:02.987680 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-02-04 03:30:02.987690 | orchestrator | Wednesday 04 February 2026 03:29:47 +0000 (0:00:01.633) 0:01:16.450 ****
2026-02-04 03:30:02.987698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-02-04 03:30:02.987707 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 03:30:02.987714 | orchestrator |
2026-02-04 03:30:02.987722 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-02-04 03:30:02.987729 | orchestrator | Wednesday 04 February 2026 03:29:49 +0000 (0:00:01.367) 0:01:17.818 ****
2026-02-04 03:30:02.987736 | orchestrator | changed: [testbed-manager]
2026-02-04 03:30:02.987743 | orchestrator |
2026-02-04 03:30:02.987751 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-02-04 03:30:02.987758 | orchestrator | Wednesday 04 February 2026 03:29:51 +0000 (0:00:02.178) 0:01:19.996 ****
2026-02-04 03:30:02.987765 | orchestrator | changed: [testbed-manager]
2026-02-04 03:30:02.987772 | orchestrator | changed: [testbed-node-1]
2026-02-04 03:30:02.987780 | orchestrator | changed: [testbed-node-4]
2026-02-04 03:30:02.987792 | orchestrator | changed: [testbed-node-3]
2026-02-04 03:30:02.987799 | orchestrator | changed: [testbed-node-5]
2026-02-04 03:30:02.987806 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:30:02.987813 | orchestrator | changed: [testbed-node-2]
2026-02-04 03:30:02.987820 | orchestrator |
2026-02-04 03:30:02.987828 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 03:30:02.987835 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:30:02.987843 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:30:02.987850 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:30:02.987857 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:30:02.987869 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:30:03.411712 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:30:03.411814 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-04 03:30:03.411830 | orchestrator |
2026-02-04 03:30:03.411843 | orchestrator |
2026-02-04 03:30:03.411854 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 03:30:03.411867 | orchestrator | Wednesday 04 February 2026 03:30:02 +0000 (0:00:11.637) 0:01:31.633 ****
2026-02-04 03:30:03.411879 | orchestrator | ===============================================================================
2026-02-04 03:30:03.411940 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 38.66s
2026-02-04 03:30:03.411953 | orchestrator | osism.services.netdata : Add repository -------------------------------- 19.30s
2026-02-04 03:30:03.411964 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.64s
2026-02-04 03:30:03.411975 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.54s
2026-02-04 03:30:03.411986 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.25s
2026-02-04 03:30:03.411997 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.18s
2026-02-04 03:30:03.412008 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.80s
2026-02-04 03:30:03.412019 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.63s
2026-02-04 03:30:03.412030 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.58s
2026-02-04 03:30:03.412040 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.50s
2026-02-04 03:30:03.412051 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.37s
2026-02-04 03:30:03.412062 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.35s
2026-02-04 03:30:03.412074 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.28s
2026-02-04 03:30:03.412085 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.23s
2026-02-04 03:30:03.412096 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.23s
2026-02-04 03:30:03.412107 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s
2026-02-04 03:30:07.997876 | orchestrator | 2026-02-04 03:30:07 | INFO  | Task 9b411fdd-e23a-4a02-889d-cbd16f94665c (prometheus) was prepared for execution.
2026-02-04 03:30:07.998072 | orchestrator | 2026-02-04 03:30:07 | INFO  | It takes a moment until task 9b411fdd-e23a-4a02-889d-cbd16f94665c (prometheus) has been started and output is visible here.
2026-02-04 03:30:17.523124 | orchestrator |
2026-02-04 03:30:17.523369 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 03:30:17.523401 | orchestrator |
2026-02-04 03:30:17.523419 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 03:30:17.523437 | orchestrator | Wednesday 04 February 2026 03:30:12 +0000 (0:00:00.293) 0:00:00.293 ****
2026-02-04 03:30:17.523454 | orchestrator | ok: [testbed-manager]
2026-02-04 03:30:17.523473 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:30:17.523490 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:30:17.523508 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:30:17.523547 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:30:17.523565 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:30:17.523583 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:30:17.523600 | orchestrator |
2026-02-04 03:30:17.523618 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 03:30:17.523636 | orchestrator | Wednesday 04 February 2026 03:30:13 +0000 (0:00:00.851) 0:00:01.144 ****
2026-02-04 03:30:17.523656 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-02-04 03:30:17.523675 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-02-04 03:30:17.523694 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-02-04 03:30:17.523711 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-02-04 03:30:17.523728 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-02-04 03:30:17.523746 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-02-04 03:30:17.523763 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-02-04 03:30:17.523780 | orchestrator |
2026-02-04 03:30:17.523798 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-02-04 03:30:17.523815 | orchestrator |
2026-02-04 03:30:17.523880 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-04 03:30:17.523981 | orchestrator | Wednesday 04 February 2026 03:30:14 +0000 (0:00:01.003) 0:00:02.148 ****
2026-02-04 03:30:17.524001 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 03:30:17.524015 | orchestrator |
2026-02-04 03:30:17.524025 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-02-04 03:30:17.524041 | orchestrator | Wednesday 04 February 2026 03:30:15 +0000 (0:00:01.404) 0:00:03.552 ****
2026-02-04 03:30:17.524078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 03:30:17.524095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 03:30:17.524107 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-04 03:30:17.524149 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:17.524204 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 03:30:17.524224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:17.524244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 03:30:17.524264 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 03:30:17.524284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 03:30:17.524303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 03:30:17.524331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:17.524342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:17.524364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:18.634677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 03:30:18.634784 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 03:30:18.634800 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 03:30:18.634813 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 03:30:18.634850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 03:30:18.634879 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-04 03:30:18.634946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:18.634962 |
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 03:30:18.634975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 03:30:18.634987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:30:18.634998 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 03:30:18.635018 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:18.635030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:30:18.635047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:18.635066 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:23.739675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:23.739808 | orchestrator | 2026-02-04 03:30:23.739820 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-04 03:30:23.739827 | orchestrator | Wednesday 04 February 2026 03:30:18 +0000 (0:00:02.993) 0:00:06.545 **** 2026-02-04 03:30:23.739834 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 03:30:23.739841 | orchestrator | 2026-02-04 03:30:23.739847 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-04 03:30:23.739853 | orchestrator | Wednesday 04 February 2026 03:30:20 +0000 (0:00:01.677) 0:00:08.222 **** 2026-02-04 03:30:23.739860 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 03:30:23.739885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:30:23.739910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:30:23.739929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:30:23.739935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:30:23.739983 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:30:23.739991 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-02-04 03:30:23.739997 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:30:23.740009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:23.740015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:23.740021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:23.740031 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:30:23.740038 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:30:23.740048 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:30:25.977755 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:30:25.977887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:25.977964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:25.977978 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 03:30:25.977991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:25.978071 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 03:30:25.978112 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 03:30:25.978138 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 03:30:25.978150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:30:25.978163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:30:25.978174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:30:25.978193 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:25.978205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:25.978217 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:25.978240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:30:26.954591 | orchestrator | 2026-02-04 03:30:26.954695 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-04 03:30:26.954713 | orchestrator | Wednesday 04 February 2026 03:30:25 +0000 (0:00:05.665) 0:00:13.888 **** 2026-02-04 03:30:26.954729 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 03:30:26.954746 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:26.954758 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:26.954813 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-04 03:30:26.954829 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:26.954863 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:30:26.954937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:26.954961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:26.954982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:26.955002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:26.955023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:26.955042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:26.955054 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:30:26.955066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:26.955087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:26.955111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:27.563770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:27.563871 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:30:27.563890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:27.563976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:27.563990 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:27.564020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:27.564062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:27.564075 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:30:27.564086 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:27.564117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:27.564130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 03:30:27.564141 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:30:27.564152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:27.564164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:27.564181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 03:30:27.564200 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:30:27.564212 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:27.564223 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:27.564235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 03:30:27.564252 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:30:28.594582 | orchestrator | 2026-02-04 03:30:28.594692 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-04 03:30:28.594707 | orchestrator | Wednesday 04 February 2026 03:30:27 +0000 (0:00:01.582) 0:00:15.471 **** 2026-02-04 03:30:28.594821 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-04 03:30:28.594841 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:28.594854 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:28.595096 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-04 03:30:28.595121 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:28.595156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:28.595170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:28.595498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:28.595515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:28.595528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:28.595559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:28.595571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:28.595637 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:30:28.595654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:28.595678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:29.737468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:29.737577 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:30:29.737596 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:30:29.737610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:29.737623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:29.737676 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:29.737690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:29.737703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 03:30:29.737714 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:30:29.737726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:29.737756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:29.737768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 03:30:29.737780 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:30:29.737791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:29.737811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:29.737828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 03:30:29.737840 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:30:29.737851 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 03:30:29.737863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 03:30:29.737882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 03:30:33.432986 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:30:33.433093 | orchestrator | 2026-02-04 03:30:33.433110 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-04 03:30:33.433124 | orchestrator | Wednesday 04 February 2026 03:30:29 +0000 (0:00:02.174) 0:00:17.645 **** 2026-02-04 03:30:33.433138 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 03:30:33.433179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:30:33.433207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:30:33.433220 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:30:33.433231 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 03:30:33.433242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 03:30:33.433271 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 03:30:33.433283 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-04 03:30:33.433295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:33.433326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:33.433354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 03:30:33.433372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 03:30:33.433389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:33.433406 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 03:30:33.433435 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 03:30:37.074647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:37.074784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:37.074802 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-04 03:30:37.074831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-04 03:30:37.074844 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-04 03:30:37.074856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:37.074870 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-04 03:30:37.074964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 03:30:37.074981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 03:30:37.074993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-04 03:30:37.075011 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:37.075023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:37.075040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:37.075061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 03:30:37.075082 | orchestrator |
2026-02-04 03:30:37.075115 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-02-04 03:30:37.075138 | orchestrator | Wednesday 04 February 2026 03:30:36 +0000 (0:00:06.427) 0:00:24.073 ****
2026-02-04 03:30:37.075158 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 03:30:37.075178 | orchestrator |
2026-02-04 03:30:37.075196 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-02-04 03:30:37.075218 | orchestrator | Wednesday 04 February 2026 03:30:37 +0000 (0:00:00.924) 0:00:24.997 ****
2026-02-04 03:30:39.839380 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099473, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8841424, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:39.839511 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099496, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9049034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:39.839564 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099473, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8841424, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:39.839589 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099473, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8841424, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:39.839611 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099473, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8841424, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:39.839624 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099473, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8841424, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:39.839682 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1099462, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8816757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:39.839695 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099473, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8841424, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:39.839707 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1099473, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8841424, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:39.839724 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099496, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9049034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:39.839736 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099496, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9049034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:39.839747 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099496, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9049034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:39.839758 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099496, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9049034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:39.839785 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099488, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.902676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581485 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1099462, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8816757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581573 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099496, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9049034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581598 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1099462, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8816757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581636 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1099462, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8816757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581644 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1099462, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8816757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581648 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099488, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.902676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581668 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099457, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581684 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1099462, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8816757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581689 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1099496, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9049034, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581697 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099474, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8845558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581703 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099488, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.902676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581716 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099488, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.902676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581728 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099457, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581774 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099488, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.902676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:41.581787 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099457, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:43.231404 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1099487, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.901676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:43.231529 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099488, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.902676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:43.231564 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099457, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:43.231577 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099457, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:43.231612 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099475, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.884676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:43.231637 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099474, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8845558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:43.231666 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099474, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8845558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:43.231711 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099457, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:43.231730 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099474, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8845558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:43.231757 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099469, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8835483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:43.231777 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1099462, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8816757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-04 03:30:43.231811 | orchestrator | skipping:
[testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099474, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8845558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:43.231829 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1099487, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.901676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:43.231846 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1099487, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.901676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:43.231867 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1099487, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.901676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.689948 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099474, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8845558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.690122 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099494, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9044933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.690143 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099475, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1770168806.884676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.690178 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1099487, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.901676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.690189 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099475, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.884676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.690200 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099256, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8019116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.690211 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1099487, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.901676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.690239 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099475, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.884676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.690255 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099469, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8835483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.690273 | orchestrator | changed: [testbed-manager] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1099488, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.902676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:30:44.690284 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099510, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.907676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.690294 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099475, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.884676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.690304 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099475, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.884676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.690314 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099469, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8835483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:44.690331 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099469, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8835483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.087807 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099494, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770168806.9044933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.087988 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099492, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9037943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.088009 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099469, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8835483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.088021 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099256, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8019116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.088033 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099469, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8835483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.088045 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099494, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9044933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.088057 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099494, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9044933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.088095 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099459, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8800762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.088116 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099510, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.907676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.088128 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099494, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9044933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.088140 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1099457, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:30:46.088151 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099492, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9037943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.088163 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1099454, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.088175 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099256, 'dev': 162, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8019116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:46.088205 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099494, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9044933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.251810 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099256, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8019116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.251948 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099510, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.907676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.251967 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099256, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8019116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.251980 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099459, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8800762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.251993 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099484, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.900676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.252004 | orchestrator | 
skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099492, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9037943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.252066 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1099454, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.252098 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099478, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.887676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.252111 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099256, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8019116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.252122 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099510, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.907676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.252133 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099459, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8800762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.252145 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099510, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1770168806.907676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.252157 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1099474, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8845558, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:30:47.252235 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099484, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.900676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:47.252267 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099506, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.906676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:48.342298 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:30:48.342400 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099492, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9037943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:48.342419 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099492, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9037943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:48.342433 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099510, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.907676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2026-02-04 03:30:48.342446 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099478, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.887676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:48.342481 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1099454, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:48.342508 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099459, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8800762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:48.342551 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099459, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8800762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:48.342574 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099506, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.906676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:48.342592 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:30:48.342605 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099492, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9037943, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:48.342617 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099484, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.900676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:48.342628 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1099454, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:48.342648 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1099454, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:48.342665 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099459, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1770168806.8800762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:48.342684 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1099487, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.901676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:30:53.263264 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099484, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.900676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:53.263375 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099478, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.887676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:53.263394 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1099454, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:53.263407 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099484, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.900676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:53.263442 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099478, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.887676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:53.263470 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099506, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.906676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:53.263484 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:30:53.263499 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099484, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.900676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:53.263530 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099506, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.906676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:53.263542 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:30:53.263554 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099478, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.887676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:53.263566 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099478, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.887676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:53.263586 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1099475, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.884676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:30:53.263598 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099506, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.906676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:53.263609 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:30:53.263625 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099506, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.906676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-04 03:30:53.263637 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:30:53.263657 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1099469, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8835483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:31:03.073938 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 3, 'inode': 1099494, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9044933, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:31:03.074114 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099256, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8019116, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:31:03.074135 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1099510, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.907676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:31:03.074172 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1099492, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.9037943, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:31:03.074186 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1099459, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8800762, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:31:03.074228 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1099454, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.8788521, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:31:03.074241 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1099484, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.900676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}) 2026-02-04 03:31:03.074271 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1099478, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.887676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:31:03.074285 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1099506, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.906676, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-04 03:31:03.074297 | orchestrator | 2026-02-04 03:31:03.074319 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-02-04 03:31:03.074332 | orchestrator | Wednesday 04 February 2026 03:31:00 +0000 (0:00:23.497) 0:00:48.494 **** 2026-02-04 03:31:03.074343 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 03:31:03.074356 | orchestrator | 2026-02-04 03:31:03.074367 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-02-04 03:31:03.074378 | orchestrator | Wednesday 04 February 2026 03:31:01 +0000 (0:00:00.732) 0:00:49.227 **** 2026-02-04 03:31:03.074389 | orchestrator | [WARNING]: Skipped 2026-02-04 03:31:03.074402 | 
orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074414 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-02-04 03:31:03.074425 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074436 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-02-04 03:31:03.074447 | orchestrator | [WARNING]: Skipped 2026-02-04 03:31:03.074458 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074469 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-02-04 03:31:03.074480 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074491 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-02-04 03:31:03.074502 | orchestrator | [WARNING]: Skipped 2026-02-04 03:31:03.074513 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074524 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-02-04 03:31:03.074535 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074546 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-02-04 03:31:03.074557 | orchestrator | [WARNING]: Skipped 2026-02-04 03:31:03.074568 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074578 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-02-04 03:31:03.074589 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074600 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-02-04 03:31:03.074611 | orchestrator | [WARNING]: Skipped 2026-02-04 03:31:03.074622 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074633 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-02-04 03:31:03.074644 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074655 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-02-04 03:31:03.074671 | orchestrator | [WARNING]: Skipped 2026-02-04 03:31:03.074682 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074693 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-02-04 03:31:03.074709 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074728 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-02-04 03:31:03.074747 | orchestrator | [WARNING]: Skipped 2026-02-04 03:31:03.074777 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074797 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-02-04 03:31:03.074815 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-02-04 03:31:03.074835 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-02-04 03:31:03.074854 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 03:31:03.074873 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-04 03:31:03.074891 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 03:31:03.074938 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-04 03:31:03.074970 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-04 03:31:03.074990 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-04 03:31:03.075009 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-04 03:31:03.075028 | orchestrator | 2026-02-04 03:31:03.075060 | orchestrator | TASK 
[prometheus : Copying over prometheus config file] ************************ 2026-02-04 03:31:33.843345 | orchestrator | Wednesday 04 February 2026 03:31:03 +0000 (0:00:01.760) 0:00:50.987 **** 2026-02-04 03:31:33.843462 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-04 03:31:33.843479 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:31:33.843493 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-04 03:31:33.843505 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:31:33.843516 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-04 03:31:33.843528 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:31:33.843539 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-04 03:31:33.843551 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:31:33.843563 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-04 03:31:33.843574 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:31:33.843585 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-04 03:31:33.843596 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:31:33.843607 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-02-04 03:31:33.843618 | orchestrator | 2026-02-04 03:31:33.843630 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-02-04 03:31:33.843642 | orchestrator | Wednesday 04 February 2026 03:31:20 +0000 (0:00:17.073) 0:01:08.061 **** 2026-02-04 03:31:33.843653 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  
2026-02-04 03:31:33.843665 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:31:33.843676 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-04 03:31:33.843687 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:31:33.843698 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-04 03:31:33.843709 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:31:33.843720 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-04 03:31:33.843731 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:31:33.843742 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-04 03:31:33.843753 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:31:33.843764 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-04 03:31:33.843775 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:31:33.843786 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-04 03:31:33.843798 | orchestrator | 2026-02-04 03:31:33.843809 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-04 03:31:33.843821 | orchestrator | Wednesday 04 February 2026 03:31:22 +0000 (0:00:02.745) 0:01:10.806 **** 2026-02-04 03:31:33.843832 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-04 03:31:33.843845 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:31:33.843856 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-04 03:31:33.843869 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 03:31:33.843908 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-04 03:31:33.843950 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:31:33.843966 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-04 03:31:33.843978 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:31:33.844006 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-04 03:31:33.844019 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:31:33.844032 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-04 03:31:33.844045 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:31:33.844057 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-04 03:31:33.844069 | orchestrator | 2026-02-04 03:31:33.844083 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-02-04 03:31:33.844096 | orchestrator | Wednesday 04 February 2026 03:31:24 +0000 (0:00:01.778) 0:01:12.584 **** 2026-02-04 03:31:33.844109 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 03:31:33.844122 | orchestrator | 2026-02-04 03:31:33.844135 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-02-04 03:31:33.844148 | orchestrator | Wednesday 04 February 2026 03:31:25 +0000 (0:00:00.755) 0:01:13.339 **** 2026-02-04 03:31:33.844160 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:31:33.844173 | orchestrator | skipping: [testbed-node-0] 
2026-02-04 03:31:33.844184 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:31:33.844195 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:31:33.844223 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:31:33.844235 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:31:33.844246 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:31:33.844257 | orchestrator | 2026-02-04 03:31:33.844268 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-02-04 03:31:33.844279 | orchestrator | Wednesday 04 February 2026 03:31:26 +0000 (0:00:00.736) 0:01:14.076 **** 2026-02-04 03:31:33.844290 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:31:33.844300 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:31:33.844311 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:31:33.844322 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:31:33.844333 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:31:33.844343 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:31:33.844354 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:31:33.844365 | orchestrator | 2026-02-04 03:31:33.844376 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-02-04 03:31:33.844387 | orchestrator | Wednesday 04 February 2026 03:31:28 +0000 (0:00:02.190) 0:01:16.266 **** 2026-02-04 03:31:33.844398 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 03:31:33.844409 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:31:33.844419 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 03:31:33.844430 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:31:33.844441 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 03:31:33.844452 | 
orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 03:31:33.844463 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:31:33.844474 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:31:33.844484 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 03:31:33.844495 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:31:33.844515 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 03:31:33.844526 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:31:33.844537 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-02-04 03:31:33.844548 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:31:33.844558 | orchestrator | 2026-02-04 03:31:33.844569 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-02-04 03:31:33.844580 | orchestrator | Wednesday 04 February 2026 03:31:29 +0000 (0:00:01.460) 0:01:17.727 **** 2026-02-04 03:31:33.844591 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-04 03:31:33.844602 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:31:33.844612 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-04 03:31:33.844623 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:31:33.844634 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-04 03:31:33.844645 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:31:33.844656 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-04 03:31:33.844666 | 
orchestrator | skipping: [testbed-node-3] 2026-02-04 03:31:33.844677 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-04 03:31:33.844688 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:31:33.844699 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-04 03:31:33.844710 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-04 03:31:33.844721 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:31:33.844731 | orchestrator | 2026-02-04 03:31:33.844742 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-04 03:31:33.844753 | orchestrator | Wednesday 04 February 2026 03:31:31 +0000 (0:00:01.436) 0:01:19.163 **** 2026-02-04 03:31:33.844770 | orchestrator | [WARNING]: Skipped 2026-02-04 03:31:33.844783 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-04 03:31:33.844794 | orchestrator | due to this access issue: 2026-02-04 03:31:33.844805 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-04 03:31:33.844816 | orchestrator | not a directory 2026-02-04 03:31:33.844827 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-04 03:31:33.844838 | orchestrator | 2026-02-04 03:31:33.844848 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-02-04 03:31:33.844859 | orchestrator | Wednesday 04 February 2026 03:31:32 +0000 (0:00:01.163) 0:01:20.326 **** 2026-02-04 03:31:33.844870 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:31:33.844881 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:31:33.844892 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:31:33.844903 | orchestrator | 
skipping: [testbed-node-2] 2026-02-04 03:31:33.844914 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:31:33.844949 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:31:33.844968 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:31:33.844987 | orchestrator | 2026-02-04 03:31:33.845004 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-04 03:31:33.845023 | orchestrator | Wednesday 04 February 2026 03:31:33 +0000 (0:00:00.959) 0:01:21.286 **** 2026-02-04 03:31:33.845036 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:31:33.845046 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:31:33.845057 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:31:33.845075 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:31:36.593425 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:31:36.593554 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:31:36.593568 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:31:36.593595 | orchestrator | 2026-02-04 03:31:36.593607 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-04 03:31:36.593619 | orchestrator | Wednesday 04 February 2026 03:31:34 +0000 (0:00:00.919) 0:01:22.206 **** 2026-02-04 03:31:36.593632 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-04 03:31:36.593647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:31:36.593658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:31:36.593668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:31:36.593696 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:31:36.593707 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:31:36.593734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:31:36.593772 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-04 03:31:36.593784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:31:36.593794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:31:36.593805 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:31:36.593816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:31:36.593831 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:31:36.593842 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:31:36.593867 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:31:40.180507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:31:40.180620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:31:40.180637 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 03:31:40.180653 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-04 03:31:40.180683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:31:40.180719 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 03:31:40.180732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 
'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-04 03:31:40.180763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:31:40.180776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:31:40.180788 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:31:40.180799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-04 03:31:40.180816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:31:40.180829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:31:40.180848 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 03:31:40.180861 | orchestrator | 2026-02-04 03:31:40.180874 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-02-04 03:31:40.180887 | orchestrator | Wednesday 04 February 2026 03:31:38 +0000 (0:00:03.960) 0:01:26.166 **** 2026-02-04 03:31:40.180898 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-04 03:31:40.180911 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:31:40.180979 | orchestrator | 2026-02-04 03:31:40.181002 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 03:33:21.948503 | orchestrator | Wednesday 04 February 2026 03:31:39 +0000 (0:00:01.218) 0:01:27.384 **** 2026-02-04 03:33:21.948634 | orchestrator | 2026-02-04 03:33:21.948652 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 03:33:21.948664 | orchestrator | Wednesday 04 February 2026 03:31:39 +0000 (0:00:00.269) 0:01:27.653 **** 2026-02-04 03:33:21.948675 | orchestrator | 2026-02-04 03:33:21.948686 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 03:33:21.948697 | orchestrator | Wednesday 04 February 2026 03:31:39 +0000 (0:00:00.073) 0:01:27.727 **** 2026-02-04 03:33:21.948708 | orchestrator | 2026-02-04 03:33:21.948719 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 
2026-02-04 03:33:21.948729 | orchestrator | Wednesday 04 February 2026 03:31:39 +0000 (0:00:00.070) 0:01:27.797 **** 2026-02-04 03:33:21.948740 | orchestrator | 2026-02-04 03:33:21.948751 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 03:33:21.948762 | orchestrator | Wednesday 04 February 2026 03:31:39 +0000 (0:00:00.066) 0:01:27.863 **** 2026-02-04 03:33:21.948772 | orchestrator | 2026-02-04 03:33:21.948783 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 03:33:21.948794 | orchestrator | Wednesday 04 February 2026 03:31:39 +0000 (0:00:00.066) 0:01:27.930 **** 2026-02-04 03:33:21.948805 | orchestrator | 2026-02-04 03:33:21.948815 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-02-04 03:33:21.948826 | orchestrator | Wednesday 04 February 2026 03:31:40 +0000 (0:00:00.070) 0:01:28.001 **** 2026-02-04 03:33:21.948837 | orchestrator | 2026-02-04 03:33:21.948847 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-02-04 03:33:21.948858 | orchestrator | Wednesday 04 February 2026 03:31:40 +0000 (0:00:00.093) 0:01:28.095 **** 2026-02-04 03:33:21.948870 | orchestrator | changed: [testbed-manager] 2026-02-04 03:33:21.948882 | orchestrator | 2026-02-04 03:33:21.948893 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-02-04 03:33:21.948904 | orchestrator | Wednesday 04 February 2026 03:32:02 +0000 (0:00:22.722) 0:01:50.817 **** 2026-02-04 03:33:21.948915 | orchestrator | changed: [testbed-manager] 2026-02-04 03:33:21.948926 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:33:21.948936 | orchestrator | changed: [testbed-node-3] 2026-02-04 03:33:21.948948 | orchestrator | changed: [testbed-node-4] 2026-02-04 03:33:21.948959 | orchestrator | changed: [testbed-node-5] 2026-02-04 03:33:21.949050 | 
orchestrator | changed: [testbed-node-0] 2026-02-04 03:33:21.949067 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:33:21.949079 | orchestrator | 2026-02-04 03:33:21.949092 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-02-04 03:33:21.949105 | orchestrator | Wednesday 04 February 2026 03:32:16 +0000 (0:00:13.467) 0:02:04.285 **** 2026-02-04 03:33:21.949117 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:33:21.949130 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:33:21.949143 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:33:21.949155 | orchestrator | 2026-02-04 03:33:21.949167 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-02-04 03:33:21.949180 | orchestrator | Wednesday 04 February 2026 03:32:21 +0000 (0:00:05.381) 0:02:09.666 **** 2026-02-04 03:33:21.949192 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:33:21.949205 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:33:21.949217 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:33:21.949229 | orchestrator | 2026-02-04 03:33:21.949242 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-02-04 03:33:21.949255 | orchestrator | Wednesday 04 February 2026 03:32:32 +0000 (0:00:10.702) 0:02:20.369 **** 2026-02-04 03:33:21.949268 | orchestrator | changed: [testbed-manager] 2026-02-04 03:33:21.949280 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:33:21.949292 | orchestrator | changed: [testbed-node-5] 2026-02-04 03:33:21.949305 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:33:21.949317 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:33:21.949330 | orchestrator | changed: [testbed-node-3] 2026-02-04 03:33:21.949342 | orchestrator | changed: [testbed-node-4] 2026-02-04 03:33:21.949355 | orchestrator | 2026-02-04 03:33:21.949382 | orchestrator | RUNNING HANDLER [prometheus : 
Restart prometheus-alertmanager container] ******* 2026-02-04 03:33:21.949393 | orchestrator | Wednesday 04 February 2026 03:32:46 +0000 (0:00:13.988) 0:02:34.357 **** 2026-02-04 03:33:21.949405 | orchestrator | changed: [testbed-manager] 2026-02-04 03:33:21.949415 | orchestrator | 2026-02-04 03:33:21.949426 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-02-04 03:33:21.949437 | orchestrator | Wednesday 04 February 2026 03:32:54 +0000 (0:00:08.477) 0:02:42.834 **** 2026-02-04 03:33:21.949448 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:33:21.949458 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:33:21.949469 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:33:21.949480 | orchestrator | 2026-02-04 03:33:21.949490 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-02-04 03:33:21.949501 | orchestrator | Wednesday 04 February 2026 03:33:05 +0000 (0:00:10.649) 0:02:53.484 **** 2026-02-04 03:33:21.949512 | orchestrator | changed: [testbed-manager] 2026-02-04 03:33:21.949522 | orchestrator | 2026-02-04 03:33:21.949533 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-02-04 03:33:21.949544 | orchestrator | Wednesday 04 February 2026 03:33:11 +0000 (0:00:05.494) 0:02:58.979 **** 2026-02-04 03:33:21.949555 | orchestrator | changed: [testbed-node-4] 2026-02-04 03:33:21.949566 | orchestrator | changed: [testbed-node-3] 2026-02-04 03:33:21.949576 | orchestrator | changed: [testbed-node-5] 2026-02-04 03:33:21.949587 | orchestrator | 2026-02-04 03:33:21.949598 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:33:21.949610 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-04 03:33:21.949623 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 
failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 03:33:21.949651 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 03:33:21.949663 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-04 03:33:21.949683 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-04 03:33:21.949694 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-04 03:33:21.949705 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-04 03:33:21.949715 | orchestrator | 2026-02-04 03:33:21.949726 | orchestrator | 2026-02-04 03:33:21.949737 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 03:33:21.949748 | orchestrator | Wednesday 04 February 2026 03:33:21 +0000 (0:00:10.326) 0:03:09.305 **** 2026-02-04 03:33:21.949759 | orchestrator | =============================================================================== 2026-02-04 03:33:21.949770 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.50s 2026-02-04 03:33:21.949780 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.72s 2026-02-04 03:33:21.949791 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 17.07s 2026-02-04 03:33:21.949807 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.99s 2026-02-04 03:33:21.949825 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.47s 2026-02-04 03:33:21.949844 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.70s 2026-02-04 03:33:21.949859 | orchestrator | prometheus : Restart 
prometheus-elasticsearch-exporter container ------- 10.65s 2026-02-04 03:33:21.949876 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.33s 2026-02-04 03:33:21.949893 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.48s 2026-02-04 03:33:21.949907 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.43s 2026-02-04 03:33:21.949924 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.67s 2026-02-04 03:33:21.949944 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.49s 2026-02-04 03:33:21.949956 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.38s 2026-02-04 03:33:21.949997 | orchestrator | prometheus : Check prometheus containers -------------------------------- 3.96s 2026-02-04 03:33:21.950080 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.99s 2026-02-04 03:33:21.950098 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.75s 2026-02-04 03:33:21.950108 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.19s 2026-02-04 03:33:21.950119 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.17s 2026-02-04 03:33:21.950130 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.78s 2026-02-04 03:33:21.950141 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.76s 2026-02-04 03:33:24.894354 | orchestrator | 2026-02-04 03:33:24 | INFO  | Task cbef37da-55e0-4885-bc00-168bcbf75f67 (grafana) was prepared for execution. 
2026-02-04 03:33:24.894477 | orchestrator | 2026-02-04 03:33:24 | INFO  | It takes a moment until task cbef37da-55e0-4885-bc00-168bcbf75f67 (grafana) has been started and output is visible here.
2026-02-04 03:33:34.887700 | orchestrator |
2026-02-04 03:33:34.887817 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 03:33:34.887842 | orchestrator |
2026-02-04 03:33:34.887858 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 03:33:34.887872 | orchestrator | Wednesday 04 February 2026 03:33:29 +0000 (0:00:00.280) 0:00:00.280 ****
2026-02-04 03:33:34.887882 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:33:34.887915 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:33:34.887925 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:33:34.887934 | orchestrator |
2026-02-04 03:33:34.887942 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 03:33:34.887951 | orchestrator | Wednesday 04 February 2026 03:33:29 +0000 (0:00:00.338) 0:00:00.618 ****
2026-02-04 03:33:34.887960 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-02-04 03:33:34.887969 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2026-02-04 03:33:34.888003 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2026-02-04 03:33:34.888012 | orchestrator |
2026-02-04 03:33:34.888021 | orchestrator | PLAY [Apply role grafana] ******************************************************
2026-02-04 03:33:34.888030 | orchestrator |
2026-02-04 03:33:34.888039 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-04 03:33:34.888048 | orchestrator | Wednesday 04 February 2026 03:33:30 +0000 (0:00:00.455) 0:00:01.074 ****
2026-02-04 03:33:34.888057 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 03:33:34.888067 | orchestrator |
2026-02-04 03:33:34.888075 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2026-02-04 03:33:34.888084 | orchestrator | Wednesday 04 February 2026 03:33:30 +0000 (0:00:00.617) 0:00:01.691 ****
2026-02-04 03:33:34.888095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:34.888127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:34.888155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:34.888172 | orchestrator |
2026-02-04 03:33:34.888187 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2026-02-04 03:33:34.888203 | orchestrator | Wednesday 04 February 2026 03:33:31 +0000 (0:00:00.905) 0:00:02.596 ****
2026-02-04 03:33:34.888218 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2026-02-04 03:33:34.888230 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2026-02-04 03:33:34.888239 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 03:33:34.888257 | orchestrator |
2026-02-04 03:33:34.888269 | orchestrator | TASK [grafana : include_tasks] *************************************************
2026-02-04 03:33:34.888279 | orchestrator | Wednesday 04 February 2026 03:33:32 +0000 (0:00:00.838) 0:00:03.435 ****
2026-02-04 03:33:34.888303 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 03:33:34.888314 | orchestrator |
2026-02-04 03:33:34.888324 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2026-02-04 03:33:34.888335 | orchestrator | Wednesday 04 February 2026 03:33:32 +0000 (0:00:00.572) 0:00:04.007 ****
2026-02-04 03:33:34.888363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:34.888375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:34.888385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:34.888396 | orchestrator |
2026-02-04 03:33:34.888406 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2026-02-04 03:33:34.888416 | orchestrator | Wednesday 04 February 2026 03:33:34 +0000 (0:00:01.339) 0:00:05.347 ****
2026-02-04 03:33:34.888426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:34.888437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:34.888454 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:33:34.888469 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:33:34.888502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:41.575703 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:33:41.575792 | orchestrator |
2026-02-04 03:33:41.575800 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2026-02-04 03:33:41.575807 | orchestrator | Wednesday 04 February 2026 03:33:34 +0000 (0:00:00.569) 0:00:05.917 ****
2026-02-04 03:33:41.575813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:41.575820 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:33:41.575826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:41.575831 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:33:41.575835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:41.575840 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:33:41.575845 | orchestrator |
2026-02-04 03:33:41.575850 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2026-02-04 03:33:41.575854 | orchestrator | Wednesday 04 February 2026 03:33:35 +0000 (0:00:00.612) 0:00:06.529 ****
2026-02-04 03:33:41.575878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:41.575900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:41.575921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:41.575929 | orchestrator |
2026-02-04 03:33:41.575937 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2026-02-04 03:33:41.575944 | orchestrator | Wednesday 04 February 2026 03:33:36 +0000 (0:00:01.261) 0:00:07.791 ****
2026-02-04 03:33:41.575948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:41.575953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:41.575958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-04 03:33:41.575968 | orchestrator |
2026-02-04 03:33:41.575973 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2026-02-04 03:33:41.576010 | orchestrator | Wednesday 04 February 2026 03:33:38 +0000 (0:00:01.563) 0:00:09.354 ****
2026-02-04 03:33:41.576017 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:33:41.576024 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:33:41.576032 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:33:41.576039 | orchestrator |
2026-02-04 03:33:41.576045 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2026-02-04 03:33:41.576051 | orchestrator | Wednesday 04 February 2026 03:33:38 +0000 (0:00:00.334) 0:00:09.688 ****
2026-02-04 03:33:41.576058 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-04 03:33:41.576066 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-04 03:33:41.576072 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2026-02-04 03:33:41.576079 | orchestrator |
2026-02-04 03:33:41.576085 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2026-02-04 03:33:41.576092 | orchestrator | Wednesday 04 February 2026 03:33:39 +0000 (0:00:01.237) 0:00:10.926 ****
2026-02-04 03:33:41.576103 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-04 03:33:41.576111 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-04 03:33:41.576120 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2026-02-04 03:33:41.576127 | orchestrator |
2026-02-04 03:33:41.576135 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2026-02-04 03:33:41.576150 | orchestrator | Wednesday 04 February 2026 03:33:41 +0000 (0:00:01.673) 0:00:12.599 ****
2026-02-04 03:33:47.933657 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 03:33:47.933764 | orchestrator |
2026-02-04 03:33:47.933780 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2026-02-04 03:33:47.933792 | orchestrator | Wednesday 04 February 2026 03:33:42 +0000 (0:00:00.748) 0:00:13.348 ****
2026-02-04 03:33:47.933802 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2026-02-04 03:33:47.933813 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2026-02-04 03:33:47.933823 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:33:47.933834 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:33:47.933844 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:33:47.933854 | orchestrator |
2026-02-04 03:33:47.933864 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2026-02-04 03:33:47.933874 | orchestrator | Wednesday 04 February 2026 03:33:43 +0000 (0:00:00.360) 0:00:14.068 ****
2026-02-04 03:33:47.933884 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:33:47.933894 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:33:47.933904 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:33:47.933913 | orchestrator |
2026-02-04 03:33:47.933923 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2026-02-04 03:33:47.933933 | orchestrator | Wednesday 04 February 2026 03:33:43 +0000 (0:00:00.360) 0:00:14.428 ****
2026-02-04 03:33:47.933946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098993, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7381544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:47.934064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098993, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7381544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:47.934078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098993, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7381544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:47.934090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1099059, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.752975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:47.934130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1099059, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.752975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:47.934142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1099059, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.752975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:47.934153 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1099009, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7396746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:47.934171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1099009, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7396746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:47.934181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1099009, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7396746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:47.934191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1099062, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.755362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:47.934208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1099062, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.755362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:47.934229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1099062, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.755362, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:51.607896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1099025, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7456746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:51.608127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1099025, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7456746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:51.608154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1099025, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7456746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:51.608166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1099043, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7514143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:51.608197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1099043, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7514143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:51.608216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1099043, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7514143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:51.608261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1098991, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.735076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:51.608289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1098991, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.735076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-04 03:33:51.608301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84,
'inode': 1098991, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.735076, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:51.608312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1099004, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7381544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:51.608324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1099004, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7381544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:51.608341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 34113, 'inode': 1099004, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7381544, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:51.608370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1099010, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7415323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:55.526870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1099010, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7415323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:55.526977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1099010, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7415323, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:55.527051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1099030, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.747413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:55.527064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1099030, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.747413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:55.527091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1099030, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.747413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:55.527102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1099056, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7516747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:55.527154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1099056, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7516747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:55.527166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1099056, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7516747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:55.527176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1099005, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7396746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:55.527186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1099005, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7396746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:55.527197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1099005, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7396746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:55.527213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1099039, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.749316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:55.527238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1099039, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.749316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:59.743912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1099039, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.749316, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:59.744080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1099027, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.747413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:59.744099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1099027, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.747413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:59.744111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1099027, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.747413, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:59.744141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1099021, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7436745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:59.744181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1099021, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7436745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:59.744214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1099021, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7436745, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:59.744227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1099018, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7435467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:59.744239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1099018, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7435467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:59.744250 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1099018, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7435467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:59.744267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1099033, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.748845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:59.744287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1099033, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.748845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:33:59.744307 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1099033, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.748845, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:03.309929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1099013, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7422326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:03.310128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1099013, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7422326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:03.310145 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1099013, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7422326, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:03.310155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1099053, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7516747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:03.310180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1099053, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7516747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-02-04 03:34:03.310208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1099053, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7516747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:03.310233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1099242, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.799675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:03.310243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1099242, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.799675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:03.310252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1099242, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.799675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:03.310261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1099099, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.774675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:03.310273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1099099, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.774675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:03.310291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1099099, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.774675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:03.310314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1099085, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7588983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:07.494401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1099085, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7588983, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:07.494518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1099085, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7588983, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:07.494536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1099161, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7799723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:07.494566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1099161, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1770168806.7799723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:07.494603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1099161, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7799723, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:07.494616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1099073, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7556746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:07.494644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1099073, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7556746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:07.494657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1099073, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7556746, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:07.494668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1099196, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.789675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:07.494685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1099196, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.789675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:07.494705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1099196, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.789675, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:07.494716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1099165, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.787321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:07.494737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1099165, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.787321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:11.091569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1099165, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.787321, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:11.091677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1099198, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7910862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 
03:34:11.091719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1099198, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7910862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:11.091748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1099198, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7910862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:11.091761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1099235, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.799039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:11.091774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1099235, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.799039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:11.091834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1099235, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.799039, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:11.091848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1099193, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7886748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:11.091870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1099193, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7886748, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:11.091887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1099155, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7778795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:11.091899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1099193, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7886748, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:11.091911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1099155, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7778795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:11.091931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1099093, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.761154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:14.821449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1099155, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1770168806.7778795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:14.821583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1099093, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.761154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:14.821619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1099151, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7778795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:14.821631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1099093, 'dev': 
162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.761154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:14.821641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1099151, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7778795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:14.821652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1099088, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7605717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:14.821682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1099088, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7605717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:14.821703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1099151, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7778795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:14.821714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1099157, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7790964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:14.821721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1099088, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7605717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:14.821728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1099157, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7790964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:14.821734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1099213, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7968884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:14.821747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1099213, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7968884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:19.073754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1099157, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7790964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:19.073879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1099207, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.792975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:19.073899 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1099207, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.792975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:19.073913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1099213, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7968884, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:19.073923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1099077, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7573435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:19.073933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1099077, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7573435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:19.073980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1099207, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.792975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:19.074098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1099082, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7576747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:19.074123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1099082, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7576747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:19.074133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1099077, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7573435, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:19.074145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1099187, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1770168806.788519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:19.074161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1099187, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.788519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:34:19.074189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1099082, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7576747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:36:06.379767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 21898, 'inode': 1099200, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7921634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:36:06.379981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1099200, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7921634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:36:06.380013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1099187, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.788519, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:36:06.380035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1099200, 'dev': 162, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770168806.7921634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-04 03:36:06.380095 | orchestrator | 2026-02-04 03:36:06.380118 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-04 03:36:06.380139 | orchestrator | Wednesday 04 February 2026 03:34:20 +0000 (0:00:37.301) 0:00:51.729 **** 2026-02-04 03:36:06.380158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 03:36:06.380234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 03:36:06.380257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-04 03:36:06.380276 | orchestrator | 2026-02-04 03:36:06.380295 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-04 03:36:06.380322 | orchestrator | Wednesday 04 February 2026 03:34:21 +0000 (0:00:00.964) 0:00:52.694 **** 2026-02-04 03:36:06.380342 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:36:06.380363 | orchestrator | 2026-02-04 03:36:06.380382 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-04 03:36:06.380400 | orchestrator | Wednesday 04 February 2026 03:34:23 +0000 (0:00:02.326) 0:00:55.020 **** 2026-02-04 03:36:06.380418 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:36:06.380437 | orchestrator | 2026-02-04 03:36:06.380456 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-04 03:36:06.380476 | orchestrator | Wednesday 04 February 2026 03:34:26 +0000 (0:00:02.207) 0:00:57.227 **** 2026-02-04 03:36:06.380495 | orchestrator | 2026-02-04 03:36:06.380513 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2026-02-04 03:36:06.380531 | orchestrator | Wednesday 04 February 2026 03:34:26 +0000 (0:00:00.088) 0:00:57.316 **** 2026-02-04 03:36:06.380549 | orchestrator | 2026-02-04 03:36:06.380568 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-04 03:36:06.380587 | orchestrator | Wednesday 04 February 2026 03:34:26 +0000 (0:00:00.074) 0:00:57.391 **** 2026-02-04 03:36:06.380607 | orchestrator | 2026-02-04 03:36:06.380626 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-04 03:36:06.380644 | orchestrator | Wednesday 04 February 2026 03:34:26 +0000 (0:00:00.073) 0:00:57.465 **** 2026-02-04 03:36:06.380663 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:36:06.380682 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:36:06.380700 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:36:06.380719 | orchestrator | 2026-02-04 03:36:06.380736 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-04 03:36:06.380755 | orchestrator | Wednesday 04 February 2026 03:34:33 +0000 (0:00:07.297) 0:01:04.763 **** 2026-02-04 03:36:06.380773 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:36:06.380803 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:36:06.380822 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-04 03:36:06.380841 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-04 03:36:06.380858 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-02-04 03:36:06.380875 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-02-04 03:36:06.380893 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:36:06.380913 | orchestrator | 2026-02-04 03:36:06.380931 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-04 03:36:06.380950 | orchestrator | Wednesday 04 February 2026 03:35:23 +0000 (0:00:50.018) 0:01:54.782 **** 2026-02-04 03:36:06.380969 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:36:06.380987 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:36:06.381005 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:36:06.381023 | orchestrator | 2026-02-04 03:36:06.381079 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-04 03:36:06.381100 | orchestrator | Wednesday 04 February 2026 03:36:01 +0000 (0:00:37.605) 0:02:32.387 **** 2026-02-04 03:36:06.381118 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:36:06.381137 | orchestrator | 2026-02-04 03:36:06.381155 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-04 03:36:06.381173 | orchestrator | Wednesday 04 February 2026 03:36:03 +0000 (0:00:02.088) 0:02:34.476 **** 2026-02-04 03:36:06.381191 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:36:06.381208 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:36:06.381219 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:36:06.381230 | orchestrator | 2026-02-04 03:36:06.381241 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-04 03:36:06.381252 | orchestrator | Wednesday 04 February 2026 03:36:03 +0000 (0:00:00.304) 0:02:34.781 **** 2026-02-04 03:36:06.381265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-02-04 03:36:06.381290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-04 03:36:07.004412 | orchestrator | 2026-02-04 03:36:07.004511 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-04 03:36:07.004527 | orchestrator | Wednesday 04 February 2026 03:36:06 +0000 (0:00:02.620) 0:02:37.402 **** 2026-02-04 03:36:07.004540 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:36:07.004552 | orchestrator | 2026-02-04 03:36:07.004563 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:36:07.004576 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 03:36:07.004588 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 03:36:07.004618 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 03:36:07.004630 | orchestrator | 2026-02-04 03:36:07.004641 | orchestrator | 2026-02-04 03:36:07.004652 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 03:36:07.004686 | orchestrator | Wednesday 04 February 2026 03:36:06 +0000 (0:00:00.290) 0:02:37.692 **** 2026-02-04 03:36:07.004698 | orchestrator | =============================================================================== 2026-02-04 03:36:07.004709 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.02s 2026-02-04 03:36:07.004720 | orchestrator | grafana : Restart remaining 
grafana containers ------------------------- 37.61s 2026-02-04 03:36:07.004731 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.30s 2026-02-04 03:36:07.004742 | orchestrator | grafana : Restart first grafana container ------------------------------- 7.30s 2026-02-04 03:36:07.004752 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.62s 2026-02-04 03:36:07.004763 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.33s 2026-02-04 03:36:07.004774 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.21s 2026-02-04 03:36:07.004785 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.09s 2026-02-04 03:36:07.004795 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.67s 2026-02-04 03:36:07.004806 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.56s 2026-02-04 03:36:07.004817 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.34s 2026-02-04 03:36:07.004827 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.26s 2026-02-04 03:36:07.004838 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.24s 2026-02-04 03:36:07.004849 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.96s 2026-02-04 03:36:07.004860 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.91s 2026-02-04 03:36:07.004870 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.84s 2026-02-04 03:36:07.004881 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.75s 2026-02-04 03:36:07.004891 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.72s 2026-02-04 03:36:07.004902 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.62s 2026-02-04 03:36:07.004913 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.61s 2026-02-04 03:36:07.331118 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-02-04 03:36:07.338606 | orchestrator | + set -e 2026-02-04 03:36:07.338670 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-04 03:36:07.339483 | orchestrator | ++ export INTERACTIVE=false 2026-02-04 03:36:07.339516 | orchestrator | ++ INTERACTIVE=false 2026-02-04 03:36:07.339527 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-04 03:36:07.339535 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-04 03:36:07.339543 | orchestrator | + source /opt/manager-vars.sh 2026-02-04 03:36:07.340681 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-04 03:36:07.340720 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-04 03:36:07.340735 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-04 03:36:07.340746 | orchestrator | ++ CEPH_VERSION=reef 2026-02-04 03:36:07.340758 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-04 03:36:07.340772 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-04 03:36:07.340783 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-04 03:36:07.340795 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-04 03:36:07.340806 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-04 03:36:07.340819 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-04 03:36:07.340832 | orchestrator | ++ export ARA=false 2026-02-04 03:36:07.340843 | orchestrator | ++ ARA=false 2026-02-04 03:36:07.340855 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-04 03:36:07.340867 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-04 03:36:07.340878 | orchestrator | ++ export TEMPEST=false 2026-02-04 03:36:07.340890 | orchestrator | ++ TEMPEST=false 
2026-02-04 03:36:07.340901 | orchestrator | ++ export IS_ZUUL=true 2026-02-04 03:36:07.340913 | orchestrator | ++ IS_ZUUL=true 2026-02-04 03:36:07.340926 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-04 03:36:07.340938 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115 2026-02-04 03:36:07.340950 | orchestrator | ++ export EXTERNAL_API=false 2026-02-04 03:36:07.340961 | orchestrator | ++ EXTERNAL_API=false 2026-02-04 03:36:07.341002 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-04 03:36:07.341015 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-04 03:36:07.341029 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-04 03:36:07.341079 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-04 03:36:07.341093 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-04 03:36:07.341104 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-04 03:36:07.342271 | orchestrator | ++ semver 9.5.0 8.0.0 2026-02-04 03:36:07.410142 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-04 03:36:07.410242 | orchestrator | + osism apply clusterapi 2026-02-04 03:36:09.514384 | orchestrator | 2026-02-04 03:36:09 | INFO  | Task a819e09f-89b8-45cf-b740-8511d0988f27 (clusterapi) was prepared for execution. 2026-02-04 03:36:09.514521 | orchestrator | 2026-02-04 03:36:09 | INFO  | It takes a moment until task a819e09f-89b8-45cf-b740-8511d0988f27 (clusterapi) has been started and output is visible here. 
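The trace above runs `semver 9.5.0 8.0.0` from `include.sh` and then gates on `[[ 1 -ge 0 ]]`, i.e. the helper appears to print a three-way comparison of two version strings. A minimal sketch of such a helper, assuming the `1 / 0 / -1` convention implied by the trace (the real `include.sh` implementation may differ):

```shell
#!/usr/bin/env bash
# Hypothetical semver comparison helper, modeled on the trace above:
# prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2 (assumed convention).
semver() {
    local IFS=.
    local -a a=($1) b=($2)   # split "9.5.0" into (9 5 0) via IFS
    local i
    for i in 0 1 2; do
        if (( ${a[i]:-0} > ${b[i]:-0} )); then echo 1;  return; fi
        if (( ${a[i]:-0} < ${b[i]:-0} )); then echo -1; return; fi
    done
    echo 0
}

result=$(semver 9.5.0 8.0.0)   # → 1, matching the "[[ 1 -ge 0 ]]" in the trace
if [[ $result -ge 0 ]]; then
    echo "manager version is new enough"
fi
```

With `MANAGER_VERSION=9.5.0` this reproduces the gate seen in `510-clusterapi.sh`: the comparison yields `1`, so the clusterapi deployment proceeds.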
2026-02-04 03:37:18.231167 | orchestrator | 2026-02-04 03:37:18.231274 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-02-04 03:37:18.231289 | orchestrator | 2026-02-04 03:37:18.231298 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-02-04 03:37:18.231308 | orchestrator | Wednesday 04 February 2026 03:36:13 +0000 (0:00:00.187) 0:00:00.187 **** 2026-02-04 03:37:18.231318 | orchestrator | included: cert_manager for testbed-manager 2026-02-04 03:37:18.231327 | orchestrator | 2026-02-04 03:37:18.231336 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-02-04 03:37:18.231345 | orchestrator | Wednesday 04 February 2026 03:36:14 +0000 (0:00:00.250) 0:00:00.437 **** 2026-02-04 03:37:18.231354 | orchestrator | changed: [testbed-manager] 2026-02-04 03:37:18.231364 | orchestrator | 2026-02-04 03:37:18.231373 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-02-04 03:37:18.231381 | orchestrator | Wednesday 04 February 2026 03:36:19 +0000 (0:00:05.539) 0:00:05.977 **** 2026-02-04 03:37:18.231390 | orchestrator | changed: [testbed-manager] 2026-02-04 03:37:18.231399 | orchestrator | 2026-02-04 03:37:18.231408 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-02-04 03:37:18.231416 | orchestrator | 2026-02-04 03:37:18.231440 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-02-04 03:37:18.231449 | orchestrator | Wednesday 04 February 2026 03:36:56 +0000 (0:00:37.147) 0:00:43.125 **** 2026-02-04 03:37:18.231458 | orchestrator | ok: [testbed-manager] 2026-02-04 03:37:18.231467 | orchestrator | 2026-02-04 03:37:18.231476 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-02-04 03:37:18.231484 | orchestrator | Wednesday 
04 February 2026 03:36:57 +0000 (0:00:01.105) 0:00:44.231 **** 2026-02-04 03:37:18.231493 | orchestrator | ok: [testbed-manager] 2026-02-04 03:37:18.231502 | orchestrator | 2026-02-04 03:37:18.231511 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-02-04 03:37:18.231520 | orchestrator | Wednesday 04 February 2026 03:36:58 +0000 (0:00:00.169) 0:00:44.400 **** 2026-02-04 03:37:18.231529 | orchestrator | ok: [testbed-manager] 2026-02-04 03:37:18.231537 | orchestrator | 2026-02-04 03:37:18.231546 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-02-04 03:37:18.231555 | orchestrator | Wednesday 04 February 2026 03:37:15 +0000 (0:00:17.298) 0:01:01.698 **** 2026-02-04 03:37:18.231577 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:37:18.231586 | orchestrator | 2026-02-04 03:37:18.231603 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-02-04 03:37:18.231614 | orchestrator | Wednesday 04 February 2026 03:37:15 +0000 (0:00:00.143) 0:01:01.841 **** 2026-02-04 03:37:18.231625 | orchestrator | changed: [testbed-manager] 2026-02-04 03:37:18.231636 | orchestrator | 2026-02-04 03:37:18.231647 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:37:18.231659 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 03:37:18.231671 | orchestrator | 2026-02-04 03:37:18.231685 | orchestrator | 2026-02-04 03:37:18.231724 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 03:37:18.231737 | orchestrator | Wednesday 04 February 2026 03:37:17 +0000 (0:00:02.361) 0:01:04.202 **** 2026-02-04 03:37:18.231750 | orchestrator | =============================================================================== 2026-02-04 03:37:18.231761 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 37.15s 2026-02-04 03:37:18.231772 | orchestrator | Initialize the CAPI management cluster --------------------------------- 17.30s 2026-02-04 03:37:18.231782 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.54s 2026-02-04 03:37:18.231793 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.36s 2026-02-04 03:37:18.231804 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.11s 2026-02-04 03:37:18.231815 | orchestrator | Include cert_manager role ----------------------------------------------- 0.25s 2026-02-04 03:37:18.231825 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.17s 2026-02-04 03:37:18.231836 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.14s 2026-02-04 03:37:18.578358 | orchestrator | + osism apply magnum 2026-02-04 03:37:20.648601 | orchestrator | 2026-02-04 03:37:20 | INFO  | Task 04eac855-744b-4e2b-860c-726c4b486a3e (magnum) was prepared for execution. 2026-02-04 03:37:20.649255 | orchestrator | 2026-02-04 03:37:20 | INFO  | It takes a moment until task 04eac855-744b-4e2b-860c-726c4b486a3e (magnum) has been started and output is visible here. 
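Several tasks in this job poll until a service answers, visible above as `FAILED - RETRYING: ... (N retries left)` followed by an eventual `ok:`. A minimal shell equivalent of that retry loop, assuming an HTTP endpoint and illustrative retry counts (the playbooks implement this with Ansible `until`/`retries`, not this helper):

```shell
#!/usr/bin/env bash
# Sketch of the retry pattern behind "FAILED - RETRYING (N retries left)":
# poll a URL until it answers or the retries are exhausted.
wait_for_http() {
    local url=$1 retries=${2:-12} delay=${3:-10}
    local i
    for (( i = retries; i > 0; i-- )); do
        if curl -fsS -o /dev/null "$url"; then
            return 0   # service is up
        fi
        echo "FAILED - RETRYING: ${url} (${i} retries left)" >&2
        sleep "$delay"
    done
    return 1           # retries exhausted
}
```

The Grafana handler above behaves like `wait_for_http <grafana-url> 12`: four attempts fail while the container starts, then the probe succeeds and the handler reports `ok`.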
2026-02-04 03:38:03.019366 | orchestrator | 2026-02-04 03:38:03.019485 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 03:38:03.019499 | orchestrator | 2026-02-04 03:38:03.019510 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 03:38:03.019522 | orchestrator | Wednesday 04 February 2026 03:37:25 +0000 (0:00:00.293) 0:00:00.293 **** 2026-02-04 03:38:03.019532 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:38:03.019543 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:38:03.019553 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:38:03.019563 | orchestrator | 2026-02-04 03:38:03.019574 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 03:38:03.019584 | orchestrator | Wednesday 04 February 2026 03:37:25 +0000 (0:00:00.354) 0:00:00.648 **** 2026-02-04 03:38:03.019595 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-04 03:38:03.019606 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-04 03:38:03.019617 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-04 03:38:03.019627 | orchestrator | 2026-02-04 03:38:03.019638 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-04 03:38:03.019648 | orchestrator | 2026-02-04 03:38:03.019658 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-04 03:38:03.019669 | orchestrator | Wednesday 04 February 2026 03:37:25 +0000 (0:00:00.509) 0:00:01.157 **** 2026-02-04 03:38:03.019679 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 03:38:03.019690 | orchestrator | 2026-02-04 03:38:03.019700 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-04 
03:38:03.019711 | orchestrator | Wednesday 04 February 2026 03:37:26 +0000 (0:00:00.646) 0:00:01.804 **** 2026-02-04 03:38:03.019722 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-04 03:38:03.019732 | orchestrator | 2026-02-04 03:38:03.019742 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-04 03:38:03.019752 | orchestrator | Wednesday 04 February 2026 03:37:30 +0000 (0:00:03.458) 0:00:05.262 **** 2026-02-04 03:38:03.019762 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-04 03:38:03.019773 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-04 03:38:03.019810 | orchestrator | 2026-02-04 03:38:03.019833 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-04 03:38:03.019843 | orchestrator | Wednesday 04 February 2026 03:37:36 +0000 (0:00:06.378) 0:00:11.641 **** 2026-02-04 03:38:03.019853 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-04 03:38:03.019936 | orchestrator | 2026-02-04 03:38:03.019991 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-04 03:38:03.020002 | orchestrator | Wednesday 04 February 2026 03:37:39 +0000 (0:00:03.412) 0:00:15.053 **** 2026-02-04 03:38:03.020012 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-04 03:38:03.020023 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-04 03:38:03.020033 | orchestrator | 2026-02-04 03:38:03.020044 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-04 03:38:03.020054 | orchestrator | Wednesday 04 February 2026 03:37:43 +0000 (0:00:03.856) 0:00:18.910 **** 2026-02-04 03:38:03.020065 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-04 03:38:03.020075 | orchestrator | 2026-02-04 03:38:03.020086 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-04 03:38:03.020118 | orchestrator | Wednesday 04 February 2026 03:37:46 +0000 (0:00:03.288) 0:00:22.198 **** 2026-02-04 03:38:03.020129 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-04 03:38:03.020139 | orchestrator | 2026-02-04 03:38:03.020149 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-04 03:38:03.020159 | orchestrator | Wednesday 04 February 2026 03:37:50 +0000 (0:00:03.689) 0:00:25.888 **** 2026-02-04 03:38:03.020170 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:38:03.020180 | orchestrator | 2026-02-04 03:38:03.020191 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-04 03:38:03.020202 | orchestrator | Wednesday 04 February 2026 03:37:54 +0000 (0:00:03.419) 0:00:29.307 **** 2026-02-04 03:38:03.020213 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:38:03.020222 | orchestrator | 2026-02-04 03:38:03.020232 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-04 03:38:03.020243 | orchestrator | Wednesday 04 February 2026 03:37:58 +0000 (0:00:03.951) 0:00:33.259 **** 2026-02-04 03:38:03.020252 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:38:03.020262 | orchestrator | 2026-02-04 03:38:03.020272 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-04 03:38:03.020282 | orchestrator | Wednesday 04 February 2026 03:38:01 +0000 (0:00:03.382) 0:00:36.641 **** 2026-02-04 03:38:03.020316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:03.020331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:03.020358 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:03.020369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:03.020381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-04 03:38:03.020397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-04 03:38:10.338071 | orchestrator |
2026-02-04 03:38:10.338198 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2026-02-04 03:38:10.338215 | orchestrator | Wednesday 04 February 2026 03:38:02 +0000 (0:00:01.609) 0:00:38.251 ****
2026-02-04 03:38:10.338226 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:38:10.338237 | orchestrator |
2026-02-04 03:38:10.338273 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2026-02-04 03:38:10.338283 | orchestrator | Wednesday 04 February 2026 03:38:03 +0000 (0:00:00.163) 0:00:38.415 ****
2026-02-04 03:38:10.338293 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:38:10.338303 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:38:10.338313 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:38:10.338322 | orchestrator |
2026-02-04 03:38:10.338332 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2026-02-04 03:38:10.338342 | orchestrator | Wednesday 04 February 2026 03:38:03 +0000 (0:00:00.829) 0:00:38.738 ****
2026-02-04 03:38:10.338351 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-04 03:38:10.338361 | orchestrator |
2026-02-04 03:38:10.338371 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2026-02-04 03:38:10.338380 | orchestrator | Wednesday 04 February 2026 03:38:04 +0000 (0:00:00.829) 0:00:39.568 ****
2026-02-04 03:38:10.338406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-04 03:38:10.338422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '',
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:10.338433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:10.338462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:10.338482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:10.338497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:10.338508 | orchestrator | 2026-02-04 03:38:10.338518 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-04 03:38:10.338528 
| orchestrator | Wednesday 04 February 2026 03:38:06 +0000 (0:00:02.434) 0:00:42.002 ****
2026-02-04 03:38:10.338538 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:38:10.338548 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:38:10.338558 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:38:10.338570 | orchestrator |
2026-02-04 03:38:10.338581 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-04 03:38:10.338593 | orchestrator | Wednesday 04 February 2026 03:38:07 +0000 (0:00:00.505) 0:00:42.507 ****
2026-02-04 03:38:10.338605 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 03:38:10.338616 | orchestrator |
2026-02-04 03:38:10.338628 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-02-04 03:38:10.338639 | orchestrator | Wednesday 04 February 2026 03:38:07 +0000 (0:00:00.584) 0:00:43.091 ****
2026-02-04 03:38:10.338650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-04 03:38:10.338675 | orchestrator |
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:11.372925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:11.373047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:11.373064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:11.373076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:11.373178 | orchestrator | 2026-02-04 03:38:11.373195 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-04 03:38:11.373207 | orchestrator | Wednesday 04 February 2026 03:38:10 +0000 (0:00:02.497) 0:00:45.589 **** 2026-02-04 03:38:11.373239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 03:38:11.373252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 03:38:11.373264 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:38:11.373283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 03:38:11.373295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 03:38:11.373306 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:38:11.373317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 03:38:11.373344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 03:38:14.915973 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:38:14.916086 | orchestrator | 2026-02-04 
03:38:14.916130 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-04 03:38:14.916143 | orchestrator | Wednesday 04 February 2026 03:38:11 +0000 (0:00:01.028) 0:00:46.617 **** 2026-02-04 03:38:14.916174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 03:38:14.916190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 03:38:14.916203 | 
orchestrator | skipping: [testbed-node-0] 2026-02-04 03:38:14.916216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 03:38:14.916253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 03:38:14.916266 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:38:14.916297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 03:38:14.916315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 03:38:14.916327 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:38:14.916338 | orchestrator | 2026-02-04 03:38:14.916350 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-04 03:38:14.916361 | orchestrator | Wednesday 04 February 2026 03:38:12 +0000 (0:00:00.973) 0:00:47.591 **** 2026-02-04 03:38:14.916373 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:14.916393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:14.916413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:20.944546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:20.944679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:20.944699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:20.944737 | orchestrator | 2026-02-04 03:38:20.944751 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-04 03:38:20.944764 | orchestrator | Wednesday 04 February 2026 03:38:14 +0000 (0:00:02.574) 0:00:50.165 **** 2026-02-04 03:38:20.944776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:20.944807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:20.944825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:20.944837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:20.944857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:20.944869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:20.944880 | orchestrator | 2026-02-04 03:38:20.944891 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-04 03:38:20.944902 | orchestrator | Wednesday 04 February 2026 03:38:20 +0000 (0:00:05.349) 0:00:55.514 **** 2026-02-04 03:38:20.944922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 03:38:22.996858 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 03:38:22.996990 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:38:22.997021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 03:38:22.997062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 03:38:22.997076 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:38:22.997088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-04 03:38:22.997150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 03:38:22.997165 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:38:22.997176 | orchestrator | 2026-02-04 03:38:22.997189 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-04 03:38:22.997201 | orchestrator | Wednesday 04 February 2026 03:38:20 +0000 (0:00:00.682) 0:00:56.197 **** 2026-02-04 03:38:22.997221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:22.997242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:22.997255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-04 03:38:22.997267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:38:22.997289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-04 03:39:20.352695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-04 03:39:20.352798 | orchestrator | 2026-02-04 03:39:20.352808 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-04 03:39:20.352815 | orchestrator | Wednesday 04 February 2026 03:38:22 +0000 (0:00:02.045) 0:00:58.242 **** 2026-02-04 03:39:20.352821 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:39:20.352828 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:39:20.352833 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:39:20.352839 | orchestrator | 2026-02-04 03:39:20.352844 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-04 03:39:20.352850 | orchestrator | Wednesday 04 February 2026 03:38:23 +0000 (0:00:00.550) 0:00:58.793 **** 2026-02-04 03:39:20.352856 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:39:20.352861 | orchestrator | 2026-02-04 03:39:20.352866 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-04 03:39:20.352872 | orchestrator | Wednesday 04 February 2026 03:38:25 +0000 (0:00:02.139) 0:01:00.933 **** 2026-02-04 03:39:20.352877 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:39:20.352883 | orchestrator | 2026-02-04 03:39:20.352889 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-04 03:39:20.352894 | orchestrator | Wednesday 04 February 2026 03:38:27 +0000 (0:00:02.208) 0:01:03.141 **** 2026-02-04 03:39:20.352899 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:39:20.352905 | orchestrator | 2026-02-04 03:39:20.352910 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-04 03:39:20.352916 | orchestrator | Wednesday 04 February 2026 03:38:44 +0000 (0:00:16.484) 0:01:19.626 **** 2026-02-04 03:39:20.352921 | orchestrator | 2026-02-04 03:39:20.352927 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-04 03:39:20.352932 | orchestrator | Wednesday 04 February 2026 03:38:44 +0000 (0:00:00.072) 0:01:19.698 **** 2026-02-04 03:39:20.352938 | orchestrator | 2026-02-04 03:39:20.352943 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-04 03:39:20.352949 | orchestrator | Wednesday 04 February 2026 03:38:44 +0000 (0:00:00.072) 0:01:19.771 **** 2026-02-04 03:39:20.352954 | orchestrator | 2026-02-04 03:39:20.352959 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-04 03:39:20.352965 | orchestrator | Wednesday 04 February 2026 03:38:44 +0000 (0:00:00.073) 0:01:19.845 **** 2026-02-04 03:39:20.352970 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:39:20.352976 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:39:20.352981 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:39:20.352987 | orchestrator | 2026-02-04 03:39:20.352992 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-04 03:39:20.352998 | orchestrator | Wednesday 04 February 2026 03:39:03 +0000 (0:00:19.264) 0:01:39.109 **** 2026-02-04 03:39:20.353003 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:39:20.353008 | orchestrator | changed: [testbed-node-1] 2026-02-04 03:39:20.353014 | orchestrator | changed: [testbed-node-2] 2026-02-04 03:39:20.353019 | orchestrator | 2026-02-04 03:39:20.353025 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:39:20.353031 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 03:39:20.353038 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-04 03:39:20.353049 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-04 03:39:20.353055 | orchestrator | 2026-02-04 03:39:20.353060 | orchestrator | 2026-02-04 03:39:20.353066 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 03:39:20.353071 | orchestrator | Wednesday 04 February 2026 03:39:19 +0000 (0:00:16.103) 0:01:55.212 **** 2026-02-04 03:39:20.353077 | orchestrator | =============================================================================== 2026-02-04 03:39:20.353083 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 19.26s 2026-02-04 03:39:20.353088 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.48s 2026-02-04 03:39:20.353094 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.10s 2026-02-04 03:39:20.353099 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.38s 2026-02-04 03:39:20.353104 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.35s 2026-02-04 03:39:20.353110 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.95s 2026-02-04 03:39:20.353115 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.86s 2026-02-04 03:39:20.353169 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.69s 2026-02-04 03:39:20.353176 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.46s 2026-02-04 03:39:20.353182 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.42s 2026-02-04 03:39:20.353187 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.41s 2026-02-04 03:39:20.353198 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.38s 2026-02-04 03:39:20.353203 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.29s 2026-02-04 03:39:20.353209 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.57s 2026-02-04 03:39:20.353214 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.50s 2026-02-04 03:39:20.353220 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.43s 2026-02-04 03:39:20.353225 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.21s 2026-02-04 03:39:20.353230 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.14s 2026-02-04 03:39:20.353244 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.05s 2026-02-04 03:39:20.353251 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.61s 2026-02-04 03:39:21.054551 | orchestrator | ok: Runtime: 1:42:25.280751 2026-02-04 03:39:21.291850 | 2026-02-04 03:39:21.291993 | TASK [Deploy in a nutshell] 2026-02-04 03:39:21.827684 | orchestrator | skipping: Conditional result was False 2026-02-04 03:39:21.851827 | 2026-02-04 03:39:21.851994 | TASK [Bootstrap services] 2026-02-04 03:39:22.571987 | orchestrator | 2026-02-04 03:39:22.572160 | orchestrator | # BOOTSTRAP 2026-02-04 03:39:22.572174 | orchestrator | 2026-02-04 03:39:22.572179 | orchestrator | + set -e 2026-02-04 03:39:22.572184 | orchestrator | + echo 2026-02-04 03:39:22.572190 | orchestrator | + echo '# BOOTSTRAP' 2026-02-04 03:39:22.572197 | orchestrator | + echo 2026-02-04 03:39:22.572217 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-04 03:39:22.580857 | orchestrator | + set -e 2026-02-04 03:39:22.580918 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-04 03:39:24.762321 | orchestrator | 2026-02-04 03:39:24 | INFO  | It takes a 
moment until task 83524643-77e3-4c84-a323-b959803592b6 (flavor-manager) has been started and output is visible here. 2026-02-04 03:39:32.285096 | orchestrator | 2026-02-04 03:39:27 | INFO  | Flavor SCS-1L-1 created 2026-02-04 03:39:32.285197 | orchestrator | 2026-02-04 03:39:28 | INFO  | Flavor SCS-1L-1-5 created 2026-02-04 03:39:32.285208 | orchestrator | 2026-02-04 03:39:28 | INFO  | Flavor SCS-1V-2 created 2026-02-04 03:39:32.285212 | orchestrator | 2026-02-04 03:39:28 | INFO  | Flavor SCS-1V-2-5 created 2026-02-04 03:39:32.285216 | orchestrator | 2026-02-04 03:39:28 | INFO  | Flavor SCS-1V-4 created 2026-02-04 03:39:32.285221 | orchestrator | 2026-02-04 03:39:28 | INFO  | Flavor SCS-1V-4-10 created 2026-02-04 03:39:32.285225 | orchestrator | 2026-02-04 03:39:29 | INFO  | Flavor SCS-1V-8 created 2026-02-04 03:39:32.285229 | orchestrator | 2026-02-04 03:39:29 | INFO  | Flavor SCS-1V-8-20 created 2026-02-04 03:39:32.285243 | orchestrator | 2026-02-04 03:39:29 | INFO  | Flavor SCS-2V-4 created 2026-02-04 03:39:32.285247 | orchestrator | 2026-02-04 03:39:29 | INFO  | Flavor SCS-2V-4-10 created 2026-02-04 03:39:32.285251 | orchestrator | 2026-02-04 03:39:29 | INFO  | Flavor SCS-2V-8 created 2026-02-04 03:39:32.285255 | orchestrator | 2026-02-04 03:39:29 | INFO  | Flavor SCS-2V-8-20 created 2026-02-04 03:39:32.285259 | orchestrator | 2026-02-04 03:39:29 | INFO  | Flavor SCS-2V-16 created 2026-02-04 03:39:32.285263 | orchestrator | 2026-02-04 03:39:29 | INFO  | Flavor SCS-2V-16-50 created 2026-02-04 03:39:32.285266 | orchestrator | 2026-02-04 03:39:30 | INFO  | Flavor SCS-4V-8 created 2026-02-04 03:39:32.285270 | orchestrator | 2026-02-04 03:39:30 | INFO  | Flavor SCS-4V-8-20 created 2026-02-04 03:39:32.285274 | orchestrator | 2026-02-04 03:39:30 | INFO  | Flavor SCS-4V-16 created 2026-02-04 03:39:32.285278 | orchestrator | 2026-02-04 03:39:30 | INFO  | Flavor SCS-4V-16-50 created 2026-02-04 03:39:32.285282 | orchestrator | 2026-02-04 03:39:30 | INFO  | Flavor 
SCS-4V-32 created 2026-02-04 03:39:32.285286 | orchestrator | 2026-02-04 03:39:30 | INFO  | Flavor SCS-4V-32-100 created 2026-02-04 03:39:32.285289 | orchestrator | 2026-02-04 03:39:30 | INFO  | Flavor SCS-8V-16 created 2026-02-04 03:39:32.285293 | orchestrator | 2026-02-04 03:39:31 | INFO  | Flavor SCS-8V-16-50 created 2026-02-04 03:39:32.285297 | orchestrator | 2026-02-04 03:39:31 | INFO  | Flavor SCS-8V-32 created 2026-02-04 03:39:32.285301 | orchestrator | 2026-02-04 03:39:31 | INFO  | Flavor SCS-8V-32-100 created 2026-02-04 03:39:32.285305 | orchestrator | 2026-02-04 03:39:31 | INFO  | Flavor SCS-16V-32 created 2026-02-04 03:39:32.285308 | orchestrator | 2026-02-04 03:39:31 | INFO  | Flavor SCS-16V-32-100 created 2026-02-04 03:39:32.285312 | orchestrator | 2026-02-04 03:39:31 | INFO  | Flavor SCS-2V-4-20s created 2026-02-04 03:39:32.285316 | orchestrator | 2026-02-04 03:39:31 | INFO  | Flavor SCS-4V-8-50s created 2026-02-04 03:39:32.285320 | orchestrator | 2026-02-04 03:39:32 | INFO  | Flavor SCS-8V-32-100s created 2026-02-04 03:39:34.594676 | orchestrator | 2026-02-04 03:39:34 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-04 03:39:44.823789 | orchestrator | 2026-02-04 03:39:44 | INFO  | Task 6190c49d-e8fe-48f8-9d78-2275fadfdca6 (bootstrap-basic) was prepared for execution. 2026-02-04 03:39:44.824029 | orchestrator | 2026-02-04 03:39:44 | INFO  | It takes a moment until task 6190c49d-e8fe-48f8-9d78-2275fadfdca6 (bootstrap-basic) has been started and output is visible here. 
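The flavor-manager output above creates flavors whose names follow the SCS naming scheme, where (for the simple forms seen in this log) `SCS-2V-4-10` means 2 vCPUs, 4 GiB RAM, and a 10 GB root disk, and a trailing `s` marks SSD-backed storage. A minimal parser sketch for just this subset (the field names and the `cpu_class` interpretation are illustrative assumptions, not part of the log):

```python
import re

def parse_scs_flavor(name):
    """Parse simple SCS flavor names as seen in this log, e.g.
    SCS-2V-4-10 or SCS-2V-4-20s. Covers only the subset
    SCS-<n><V|L>-<ram>[-<disk>[s]]; returns None on no match."""
    m = re.fullmatch(r"SCS-(\d+)([VL])-(\d+)(?:-(\d+)(s?))?", name)
    if not m:
        return None
    vcpus, cpu_class, ram_gib, disk_gb, ssd = m.groups()
    return {
        "vcpus": int(vcpus),
        "cpu_class": cpu_class,  # 'V' or 'L' per the SCS scheme
        "ram_gib": int(ram_gib),
        "disk_gb": int(disk_gb) if disk_gb else 0,  # 0 = no root disk in name
        "ssd": ssd == "s",
    }

print(parse_scs_flavor("SCS-2V-4-20s"))
```

Flavors without a disk component (e.g. `SCS-1L-1`) parse with `disk_gb` of 0, matching the diskless variants created first in the log.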
2026-02-04 03:40:27.939668 | orchestrator | 2026-02-04 03:40:27.939788 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-04 03:40:27.939806 | orchestrator | 2026-02-04 03:40:27.939819 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 03:40:27.939831 | orchestrator | Wednesday 04 February 2026 03:39:49 +0000 (0:00:00.070) 0:00:00.070 **** 2026-02-04 03:40:27.939842 | orchestrator | ok: [localhost] 2026-02-04 03:40:27.939854 | orchestrator | 2026-02-04 03:40:27.939865 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-04 03:40:27.939876 | orchestrator | Wednesday 04 February 2026 03:39:51 +0000 (0:00:01.836) 0:00:01.906 **** 2026-02-04 03:40:27.939887 | orchestrator | ok: [localhost] 2026-02-04 03:40:27.939898 | orchestrator | 2026-02-04 03:40:27.939916 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-04 03:40:27.939935 | orchestrator | Wednesday 04 February 2026 03:39:57 +0000 (0:00:06.777) 0:00:08.684 **** 2026-02-04 03:40:27.939963 | orchestrator | changed: [localhost] 2026-02-04 03:40:27.939983 | orchestrator | 2026-02-04 03:40:27.940001 | orchestrator | TASK [Create public network] *************************************************** 2026-02-04 03:40:27.940021 | orchestrator | Wednesday 04 February 2026 03:40:04 +0000 (0:00:06.422) 0:00:15.106 **** 2026-02-04 03:40:27.940038 | orchestrator | changed: [localhost] 2026-02-04 03:40:27.940056 | orchestrator | 2026-02-04 03:40:27.940074 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-04 03:40:27.940093 | orchestrator | Wednesday 04 February 2026 03:40:09 +0000 (0:00:05.357) 0:00:20.464 **** 2026-02-04 03:40:27.940118 | orchestrator | changed: [localhost] 2026-02-04 03:40:27.940138 | orchestrator | 2026-02-04 03:40:27.940191 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-04 03:40:27.940213 | orchestrator | Wednesday 04 February 2026 03:40:15 +0000 (0:00:06.376) 0:00:26.840 **** 2026-02-04 03:40:27.940233 | orchestrator | changed: [localhost] 2026-02-04 03:40:27.940247 | orchestrator | 2026-02-04 03:40:27.940260 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-04 03:40:27.940272 | orchestrator | Wednesday 04 February 2026 03:40:20 +0000 (0:00:04.302) 0:00:31.143 **** 2026-02-04 03:40:27.940285 | orchestrator | changed: [localhost] 2026-02-04 03:40:27.940297 | orchestrator | 2026-02-04 03:40:27.940310 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-04 03:40:27.940335 | orchestrator | Wednesday 04 February 2026 03:40:24 +0000 (0:00:03.784) 0:00:34.928 **** 2026-02-04 03:40:27.940348 | orchestrator | ok: [localhost] 2026-02-04 03:40:27.940361 | orchestrator | 2026-02-04 03:40:27.940374 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:40:27.940387 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 03:40:27.940400 | orchestrator | 2026-02-04 03:40:27.940413 | orchestrator | 2026-02-04 03:40:27.940425 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 03:40:27.940439 | orchestrator | Wednesday 04 February 2026 03:40:27 +0000 (0:00:03.586) 0:00:38.514 **** 2026-02-04 03:40:27.940451 | orchestrator | =============================================================================== 2026-02-04 03:40:27.940464 | orchestrator | Get volume type LUKS ---------------------------------------------------- 6.78s 2026-02-04 03:40:27.940477 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.42s 2026-02-04 03:40:27.940490 | 
orchestrator | Set public network to default ------------------------------------------- 6.38s
2026-02-04 03:40:27.940502 | orchestrator | Create public network --------------------------------------------------- 5.36s
2026-02-04 03:40:27.940539 | orchestrator | Create public subnet ---------------------------------------------------- 4.30s
2026-02-04 03:40:27.940553 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.78s
2026-02-04 03:40:27.940567 | orchestrator | Create manager role ----------------------------------------------------- 3.59s
2026-02-04 03:40:27.940578 | orchestrator | Gathering Facts --------------------------------------------------------- 1.84s
2026-02-04 03:40:30.484146 | orchestrator | 2026-02-04 03:40:30 | INFO  | It takes a moment until task 3b664f88-6a79-459c-aed3-16adb3c86ccc (image-manager) has been started and output is visible here.
2026-02-04 03:41:13.027247 | orchestrator | 2026-02-04 03:40:33 | INFO  | Processing image 'Cirros 0.6.2'
2026-02-04 03:41:13.027353 | orchestrator | 2026-02-04 03:40:33 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-02-04 03:41:13.027368 | orchestrator | 2026-02-04 03:40:33 | INFO  | Importing image Cirros 0.6.2
2026-02-04 03:41:13.027378 | orchestrator | 2026-02-04 03:40:33 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-04 03:41:13.027388 | orchestrator | 2026-02-04 03:40:35 | INFO  | Waiting for image to leave queued state...
2026-02-04 03:41:13.027397 | orchestrator | 2026-02-04 03:40:37 | INFO  | Waiting for import to complete...
2026-02-04 03:41:13.027406 | orchestrator | 2026-02-04 03:40:47 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-02-04 03:41:13.027416 | orchestrator | 2026-02-04 03:40:48 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-02-04 03:41:13.027425 | orchestrator | 2026-02-04 03:40:48 | INFO  | Setting internal_version = 0.6.2
2026-02-04 03:41:13.027434 | orchestrator | 2026-02-04 03:40:48 | INFO  | Setting image_original_user = cirros
2026-02-04 03:41:13.027443 | orchestrator | 2026-02-04 03:40:48 | INFO  | Adding tag os:cirros
2026-02-04 03:41:13.027452 | orchestrator | 2026-02-04 03:40:48 | INFO  | Setting property architecture: x86_64
2026-02-04 03:41:13.027461 | orchestrator | 2026-02-04 03:40:48 | INFO  | Setting property hw_disk_bus: scsi
2026-02-04 03:41:13.027469 | orchestrator | 2026-02-04 03:40:48 | INFO  | Setting property hw_rng_model: virtio
2026-02-04 03:41:13.027478 | orchestrator | 2026-02-04 03:40:49 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-04 03:41:13.027487 | orchestrator | 2026-02-04 03:40:49 | INFO  | Setting property hw_watchdog_action: reset
2026-02-04 03:41:13.027496 | orchestrator | 2026-02-04 03:40:49 | INFO  | Setting property hypervisor_type: qemu
2026-02-04 03:41:13.027505 | orchestrator | 2026-02-04 03:40:50 | INFO  | Setting property os_distro: cirros
2026-02-04 03:41:13.027513 | orchestrator | 2026-02-04 03:40:50 | INFO  | Setting property os_purpose: minimal
2026-02-04 03:41:13.027522 | orchestrator | 2026-02-04 03:40:50 | INFO  | Setting property replace_frequency: never
2026-02-04 03:41:13.027530 | orchestrator | 2026-02-04 03:40:50 | INFO  | Setting property uuid_validity: none
2026-02-04 03:41:13.027539 | orchestrator | 2026-02-04 03:40:51 | INFO  | Setting property provided_until: none
2026-02-04 03:41:13.027547 | orchestrator | 2026-02-04 03:40:51 | INFO  | Setting property image_description: Cirros
2026-02-04 03:41:13.027556 | orchestrator | 2026-02-04 03:40:51 | INFO  | Setting property image_name: Cirros
2026-02-04 03:41:13.027565 | orchestrator | 2026-02-04 03:40:51 | INFO  | Setting property internal_version: 0.6.2
2026-02-04 03:41:13.027573 | orchestrator | 2026-02-04 03:40:51 | INFO  | Setting property image_original_user: cirros
2026-02-04 03:41:13.027602 | orchestrator | 2026-02-04 03:40:52 | INFO  | Setting property os_version: 0.6.2
2026-02-04 03:41:13.027619 | orchestrator | 2026-02-04 03:40:52 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-04 03:41:13.027629 | orchestrator | 2026-02-04 03:40:52 | INFO  | Setting property image_build_date: 2023-05-30
2026-02-04 03:41:13.027638 | orchestrator | 2026-02-04 03:40:52 | INFO  | Checking status of 'Cirros 0.6.2'
2026-02-04 03:41:13.027646 | orchestrator | 2026-02-04 03:40:52 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-02-04 03:41:13.027655 | orchestrator | 2026-02-04 03:40:52 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-02-04 03:41:13.027663 | orchestrator | 2026-02-04 03:40:53 | INFO  | Processing image 'Cirros 0.6.3'
2026-02-04 03:41:13.027676 | orchestrator | 2026-02-04 03:40:53 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-02-04 03:41:13.027685 | orchestrator | 2026-02-04 03:40:53 | INFO  | Importing image Cirros 0.6.3
2026-02-04 03:41:13.027694 | orchestrator | 2026-02-04 03:40:53 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-04 03:41:13.027703 | orchestrator | 2026-02-04 03:40:54 | INFO  | Waiting for image to leave queued state...
2026-02-04 03:41:13.027711 | orchestrator | 2026-02-04 03:40:56 | INFO  | Waiting for import to complete...
2026-02-04 03:41:13.027737 | orchestrator | 2026-02-04 03:41:06 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-02-04 03:41:13.027748 | orchestrator | 2026-02-04 03:41:07 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-02-04 03:41:13.027758 | orchestrator | 2026-02-04 03:41:07 | INFO  | Setting internal_version = 0.6.3
2026-02-04 03:41:13.027769 | orchestrator | 2026-02-04 03:41:07 | INFO  | Setting image_original_user = cirros
2026-02-04 03:41:13.027779 | orchestrator | 2026-02-04 03:41:07 | INFO  | Adding tag os:cirros
2026-02-04 03:41:13.027789 | orchestrator | 2026-02-04 03:41:07 | INFO  | Setting property architecture: x86_64
2026-02-04 03:41:13.027799 | orchestrator | 2026-02-04 03:41:07 | INFO  | Setting property hw_disk_bus: scsi
2026-02-04 03:41:13.027809 | orchestrator | 2026-02-04 03:41:08 | INFO  | Setting property hw_rng_model: virtio
2026-02-04 03:41:13.027820 | orchestrator | 2026-02-04 03:41:08 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-04 03:41:13.027830 | orchestrator | 2026-02-04 03:41:08 | INFO  | Setting property hw_watchdog_action: reset
2026-02-04 03:41:13.027840 | orchestrator | 2026-02-04 03:41:08 | INFO  | Setting property hypervisor_type: qemu
2026-02-04 03:41:13.027850 | orchestrator | 2026-02-04 03:41:09 | INFO  | Setting property os_distro: cirros
2026-02-04 03:41:13.027860 | orchestrator | 2026-02-04 03:41:09 | INFO  | Setting property os_purpose: minimal
2026-02-04 03:41:13.027870 | orchestrator | 2026-02-04 03:41:09 | INFO  | Setting property replace_frequency: never
2026-02-04 03:41:13.027881 | orchestrator | 2026-02-04 03:41:09 | INFO  | Setting property uuid_validity: none
2026-02-04 03:41:13.027891 | orchestrator | 2026-02-04 03:41:10 | INFO  | Setting property provided_until: none
2026-02-04 03:41:13.027900 | orchestrator | 2026-02-04 03:41:10 | INFO  | Setting property image_description: Cirros
2026-02-04 03:41:13.027911 | orchestrator | 2026-02-04 03:41:10 | INFO  | Setting property image_name: Cirros
2026-02-04 03:41:13.027921 | orchestrator | 2026-02-04 03:41:10 | INFO  | Setting property internal_version: 0.6.3
2026-02-04 03:41:13.027939 | orchestrator | 2026-02-04 03:41:11 | INFO  | Setting property image_original_user: cirros
2026-02-04 03:41:13.027949 | orchestrator | 2026-02-04 03:41:11 | INFO  | Setting property os_version: 0.6.3
2026-02-04 03:41:13.027959 | orchestrator | 2026-02-04 03:41:11 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-04 03:41:13.027969 | orchestrator | 2026-02-04 03:41:11 | INFO  | Setting property image_build_date: 2024-09-26
2026-02-04 03:41:13.027979 | orchestrator | 2026-02-04 03:41:12 | INFO  | Checking status of 'Cirros 0.6.3'
2026-02-04 03:41:13.027989 | orchestrator | 2026-02-04 03:41:12 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-02-04 03:41:13.028000 | orchestrator | 2026-02-04 03:41:12 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-02-04 03:41:13.361271 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-02-04 03:41:15.715229 | orchestrator | 2026-02-04 03:41:15 | INFO  | date: 2026-02-04
2026-02-04 03:41:15.715333 | orchestrator | 2026-02-04 03:41:15 | INFO  | image: octavia-amphora-haproxy-2024.2.20260204.qcow2
2026-02-04 03:41:15.715371 | orchestrator | 2026-02-04 03:41:15 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2
2026-02-04 03:41:15.715387 | orchestrator | 2026-02-04 03:41:15 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2.CHECKSUM
2026-02-04 03:41:15.871026 | orchestrator | 2026-02-04 03:41:15 | INFO  | checksum: fa81774e60e440b52eb763bc24f9302dc0d7fa56080593c2ba4182f5e23fdc54
2026-02-04 03:41:15.942544 | orchestrator | 2026-02-04 03:41:15 | INFO  | It takes a moment until task b77e036d-a6fd-457d-bf4e-23e6623322c1 (image-manager) has been started and output is visible here.
2026-02-04 03:42:49.034723 | orchestrator | 2026-02-04 03:41:18 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-04'
2026-02-04 03:42:49.034842 | orchestrator | 2026-02-04 03:41:18 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2: 200
2026-02-04 03:42:49.034859 | orchestrator | 2026-02-04 03:41:18 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-04
2026-02-04 03:42:49.034871 | orchestrator | 2026-02-04 03:41:18 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2
2026-02-04 03:42:49.034883 | orchestrator | 2026-02-04 03:41:20 | INFO  | Waiting for image to leave queued state...
2026-02-04 03:42:49.034894 | orchestrator | 2026-02-04 03:41:22 | INFO  | Waiting for import to complete...
2026-02-04 03:42:49.034906 | orchestrator | 2026-02-04 03:41:32 | INFO  | Waiting for import to complete...
2026-02-04 03:42:49.034916 | orchestrator | 2026-02-04 03:41:42 | INFO  | Waiting for import to complete...
2026-02-04 03:42:49.034927 | orchestrator | 2026-02-04 03:41:52 | INFO  | Waiting for import to complete...
2026-02-04 03:42:49.034941 | orchestrator | 2026-02-04 03:42:02 | INFO  | Waiting for import to complete...
2026-02-04 03:42:49.034952 | orchestrator | 2026-02-04 03:42:12 | INFO  | Waiting for import to complete...
2026-02-04 03:42:49.034963 | orchestrator | 2026-02-04 03:42:22 | INFO  | Waiting for import to complete...
2026-02-04 03:42:49.034974 | orchestrator | 2026-02-04 03:42:32 | INFO  | Waiting for import to complete...
2026-02-04 03:42:49.034985 | orchestrator | 2026-02-04 03:42:42 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-04' successfully completed, reloading images
2026-02-04 03:42:49.035022 | orchestrator | 2026-02-04 03:42:43 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-04'
2026-02-04 03:42:49.035033 | orchestrator | 2026-02-04 03:42:43 | INFO  | Setting internal_version = 2026-02-04
2026-02-04 03:42:49.035044 | orchestrator | 2026-02-04 03:42:43 | INFO  | Setting image_original_user = ubuntu
2026-02-04 03:42:49.035055 | orchestrator | 2026-02-04 03:42:43 | INFO  | Adding tag amphora
2026-02-04 03:42:49.035066 | orchestrator | 2026-02-04 03:42:43 | INFO  | Adding tag os:ubuntu
2026-02-04 03:42:49.035077 | orchestrator | 2026-02-04 03:42:43 | INFO  | Setting property architecture: x86_64
2026-02-04 03:42:49.035087 | orchestrator | 2026-02-04 03:42:44 | INFO  | Setting property hw_disk_bus: scsi
2026-02-04 03:42:49.035098 | orchestrator | 2026-02-04 03:42:44 | INFO  | Setting property hw_rng_model: virtio
2026-02-04 03:42:49.035109 | orchestrator | 2026-02-04 03:42:44 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-04 03:42:49.035119 | orchestrator | 2026-02-04 03:42:45 | INFO  | Setting property hw_watchdog_action: reset
2026-02-04 03:42:49.035130 | orchestrator | 2026-02-04 03:42:45 | INFO  | Setting property hypervisor_type: qemu
2026-02-04 03:42:49.035140 | orchestrator | 2026-02-04 03:42:45 | INFO  | Setting property os_distro: ubuntu
2026-02-04 03:42:49.035151 | orchestrator | 2026-02-04 03:42:45 | INFO  | Setting property replace_frequency: quarterly
2026-02-04 03:42:49.035161 | orchestrator | 2026-02-04 03:42:46 | INFO  | Setting property uuid_validity: last-1
2026-02-04 03:42:49.035172 | orchestrator | 2026-02-04 03:42:46 | INFO  | Setting property provided_until: none
2026-02-04 03:42:49.035199 | orchestrator | 2026-02-04 03:42:46 | INFO  | Setting property os_purpose: network
2026-02-04 03:42:49.035210 | orchestrator | 2026-02-04 03:42:46 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-02-04 03:42:49.035250 | orchestrator | 2026-02-04 03:42:47 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-02-04 03:42:49.035263 | orchestrator | 2026-02-04 03:42:47 | INFO  | Setting property internal_version: 2026-02-04
2026-02-04 03:42:49.035274 | orchestrator | 2026-02-04 03:42:47 | INFO  | Setting property image_original_user: ubuntu
2026-02-04 03:42:49.035285 | orchestrator | 2026-02-04 03:42:47 | INFO  | Setting property os_version: 2026-02-04
2026-02-04 03:42:49.035295 | orchestrator | 2026-02-04 03:42:48 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260204.qcow2
2026-02-04 03:42:49.035306 | orchestrator | 2026-02-04 03:42:48 | INFO  | Setting property image_build_date: 2026-02-04
2026-02-04 03:42:49.035334 | orchestrator | 2026-02-04 03:42:48 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-04'
2026-02-04 03:42:49.035346 | orchestrator | 2026-02-04 03:42:48 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-04'
2026-02-04 03:42:49.035357 | orchestrator | 2026-02-04 03:42:48 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-02-04 03:42:49.035367 | orchestrator | 2026-02-04 03:42:48 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-02-04 03:42:49.035379 | orchestrator | 2026-02-04 03:42:48 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-02-04 03:42:49.035390 | orchestrator | 2026-02-04 03:42:48 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-02-04 03:42:49.546095 | orchestrator | ok: Runtime: 0:03:27.181160
2026-02-04 03:42:49.565230 |
2026-02-04 03:42:49.565386 | TASK [Run checks]
2026-02-04 03:42:50.322880 | orchestrator | + set -e
2026-02-04 03:42:50.323074 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-04 03:42:50.323095 | orchestrator | ++ export INTERACTIVE=false
2026-02-04 03:42:50.323116 | orchestrator | ++ INTERACTIVE=false
2026-02-04 03:42:50.323128 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-04 03:42:50.323140 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-04 03:42:50.323153 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-04 03:42:50.323795 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-04 03:42:50.331111 | orchestrator |
2026-02-04 03:42:50.331202 | orchestrator | # CHECK
2026-02-04 03:42:50.331217 | orchestrator |
2026-02-04 03:42:50.331260 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-04 03:42:50.331290 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-04 03:42:50.331309 | orchestrator | + echo
2026-02-04 03:42:50.331323 | orchestrator | + echo '# CHECK'
2026-02-04 03:42:50.331334 | orchestrator | + echo
2026-02-04 03:42:50.331350 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-04 03:42:50.332068 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-04 03:42:50.394133 | orchestrator |
2026-02-04 03:42:50.394283 | orchestrator | ## Containers @ testbed-manager
2026-02-04 03:42:50.394307 | orchestrator |
2026-02-04 03:42:50.394331 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-04 03:42:50.394351 | orchestrator | + echo
2026-02-04 03:42:50.394364 | orchestrator | + echo '## Containers @ testbed-manager'
2026-02-04 03:42:50.394376 | orchestrator | + echo
2026-02-04 03:42:50.394388 | orchestrator | + osism container testbed-manager ps
2026-02-04 03:42:52.370597 | orchestrator | 2026-02-04 03:42:52 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-02-04 03:42:52.780966 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-04 03:42:52.781121 | orchestrator | c68d001c3caf registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-02-04 03:42:52.781149 | orchestrator | baa73e7b3278 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-02-04 03:42:52.781162 | orchestrator | 39e4f8153283 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-02-04 03:42:52.781174 | orchestrator | 067521d3d149 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-04 03:42:52.781186 | orchestrator | e81493b245a9 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-02-04 03:42:52.781203 | orchestrator | f719b9b68b6a registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 58 minutes ago Up 57 minutes cephclient
2026-02-04 03:42:52.781215 | orchestrator | 6338af6e0542 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-04 03:42:52.781269 | orchestrator | 489725aa4fab registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-04 03:42:52.781311 | orchestrator | 4bae86bda040 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-04 03:42:52.781333 | orchestrator | e4ba825ff98c registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-02-04 03:42:52.781353 | orchestrator | e1b2d246ce01 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-02-04 03:42:52.781372 | orchestrator | 364592e7e392 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-02-04 03:42:52.781392 | orchestrator | 351fe213a45b registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-02-04 03:42:52.781412 | orchestrator | 2c5d363e927f registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-02-04 03:42:52.781459 | orchestrator | 7de89ced61e2 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-02-04 03:42:52.781489 | orchestrator | a0af2009bbc1 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-02-04 03:42:52.781502 | orchestrator | 6bd50d63ebc1 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-02-04 03:42:52.781513 | orchestrator | 9c8b64f1a94a registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-02-04 03:42:52.781525 | orchestrator | 025dc95ff07b registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-02-04 03:42:52.781536 | orchestrator | 2cfb494366ea registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-02-04 03:42:52.781548 | orchestrator | 8c167f89b69d registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-02-04 03:42:52.781559 | orchestrator | 9c1f320ced73 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-02-04 03:42:52.781581 | orchestrator | 16e728375fcf registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-02-04 03:42:52.782961 | orchestrator | ceab8f5fbf90 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-02-04 03:42:52.782995 | orchestrator | 17c7f9cbb1a5 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-02-04 03:42:52.783008 | orchestrator | 6d1107847d50 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-02-04 03:42:52.783019 | orchestrator | 6841b89db55b registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-02-04 03:42:52.783031 | orchestrator | 7c4f438f56a1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-02-04 03:42:52.783042 | orchestrator | 82c7d36ce68a registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-02-04 03:42:52.783065 | orchestrator | 76beb9defee3 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-02-04 03:42:53.127581 | orchestrator |
2026-02-04 03:42:53.127694 | orchestrator | ## Images @ testbed-manager
2026-02-04 03:42:53.127712 | orchestrator |
2026-02-04 03:42:53.127725 | orchestrator | + echo
2026-02-04 03:42:53.127738 | orchestrator | + echo '## Images @ testbed-manager'
2026-02-04 03:42:53.127750 | orchestrator | + echo
2026-02-04 03:42:53.127767 | orchestrator | + osism container testbed-manager images
2026-02-04 03:42:55.519664 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-04 03:42:55.519781 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 4c55021ebaa3 24 hours ago 238MB
2026-02-04 03:42:55.519798 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 7 days ago 41.4MB
2026-02-04 03:42:55.519809 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB
2026-02-04 03:42:55.519820 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB
2026-02-04 03:42:55.519831 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-04 03:42:55.519842 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-04 03:42:55.519853 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-04 03:42:55.519866 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB
2026-02-04 03:42:55.519877 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-04 03:42:55.519917 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB
2026-02-04 03:42:55.519929 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB
2026-02-04 03:42:55.519940 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-04 03:42:55.519951 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB
2026-02-04 03:42:55.519962 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB
2026-02-04 03:42:55.519972 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB
2026-02-04 03:42:55.519983 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB
2026-02-04 03:42:55.519994 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB
2026-02-04 03:42:55.520005 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB
2026-02-04 03:42:55.520016 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 2 months ago 334MB
2026-02-04 03:42:55.520027 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 3 months ago 742MB
2026-02-04 03:42:55.520038 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB
2026-02-04 03:42:55.520048 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 6 months ago 226MB
2026-02-04 03:42:55.520059 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 9 months ago 453MB
2026-02-04 03:42:55.520070 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB
2026-02-04 03:42:55.520080 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-02-04 03:42:55.839132 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-04 03:42:55.839715 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-04 03:42:55.880815 | orchestrator |
2026-02-04 03:42:55.880920 | orchestrator | ## Containers @ testbed-node-0
2026-02-04 03:42:55.880938 | orchestrator |
2026-02-04 03:42:55.880950 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-04 03:42:55.880961 | orchestrator | + echo
2026-02-04 03:42:55.880973 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-02-04 03:42:55.880985 | orchestrator | + echo
2026-02-04 03:42:55.880996 | orchestrator | + osism container testbed-node-0 ps
2026-02-04 03:42:58.333554 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-04 03:42:58.333661 | orchestrator | 4bcb57ab242e registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-02-04 03:42:58.333701 | orchestrator | 1717a6a11228 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api
2026-02-04 03:42:58.333714 | orchestrator | 9c349de37ecf registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2026-02-04 03:42:58.333725 | orchestrator | a36366774347 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-02-04 03:42:58.333760 | orchestrator | a71d7c75b32e registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-02-04 03:42:58.333778 | orchestrator | 8b24312d6ce8 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-02-04 03:42:58.333810 | orchestrator | d71113d7be0a registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-02-04 03:42:58.333835 | orchestrator | e0525a19b1a7 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-04 03:42:58.333853 | orchestrator | 5154e8cac7b1 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-02-04 03:42:58.333872 | orchestrator | 2f0a6aa4e850 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-02-04 03:42:58.333890 | orchestrator | 1c6a7cb7a4e7 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-02-04 03:42:58.333905 | orchestrator | fe3c89716540 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-02-04 03:42:58.333923 | orchestrator | d93ad576f97b registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-02-04 03:42:58.333942 | orchestrator | 3fbb3bfbe5bb registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-02-04 03:42:58.333960 | orchestrator | 012796d793c9 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-02-04 03:42:58.333979 | orchestrator | 60ab102f0eda registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-02-04 03:42:58.334000 | orchestrator | f3b71538fa31 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-02-04 03:42:58.334071 | orchestrator | 6e0f1a5f6839 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-02-04 03:42:58.334086 | orchestrator | 2054e6c8da6d registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker
2026-02-04 03:42:58.334127 | orchestrator | 4751e9849707 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-02-04 03:42:58.334140 | orchestrator | a0f9761d7ea9 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-02-04 03:42:58.334151 | orchestrator | e2173e1755a8 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-02-04 03:42:58.334172 | orchestrator | 4c3c81fa555f registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-02-04 03:42:58.334183 | orchestrator | 0cad61d28756 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-02-04 03:42:58.334194 | orchestrator | 1d373166b0f4 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns
2026-02-04 03:42:58.334210 | orchestrator | 0288fb03478c registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-02-04 03:42:58.334222 | orchestrator | b201bf66e77a registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-02-04 03:42:58.334265 | orchestrator | 43858195e0da registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-02-04 03:42:58.334281 | orchestrator | 421b0b9ebdd0 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-02-04 03:42:58.334293 | orchestrator | 19ea85ffce49 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-02-04 03:42:58.334304 | orchestrator | b538e2e4f0f1 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-02-04 03:42:58.334316 | orchestrator | 7886b2d4e322 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-02-04 03:42:58.334327 | orchestrator | 39c7f7f530a3 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-02-04 03:42:58.334339 | orchestrator | c9f3ace6b317 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume
2026-02-04 03:42:58.334350 | orchestrator | 48477c51e462 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-02-04 03:42:58.334361 | orchestrator | 1ba0c2be59f1 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-02-04 03:42:58.334372 | orchestrator | 6ca5f1005e33 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-02-04 03:42:58.334384 | orchestrator | e218198022e4 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console
2026-02-04 03:42:58.334395 | orchestrator | 4c8852db30c2 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver
2026-02-04 03:42:58.334415 | orchestrator | e9ea54fd4282 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon
2026-02-04 03:42:58.334435 | orchestrator | edef612f523d registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy
2026-02-04 03:42:58.334447 | orchestrator | 39776670bf8c registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor
2026-02-04 03:42:58.334464 | orchestrator | aa05ed76d20b registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api
2026-02-04 03:42:58.334475 | orchestrator | ac9c0f698050 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler
2026-02-04 03:42:58.334486 | orchestrator | 11f4629d1dbb registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server
2026-02-04 03:42:58.334497 | orchestrator | 108d19060935 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api
2026-02-04 03:42:58.334508 | orchestrator | 565c7c168e54 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone
2026-02-04 03:42:58.334519 | orchestrator | 7d254d0f16b5 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet
2026-02-04 03:42:58.334530 | orchestrator | 7a003b1b2816 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone_ssh
2026-02-04 03:42:58.334541 | orchestrator | 44a2a7be2ea8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-0
2026-02-04 03:42:58.334552 | orchestrator | 221a6162bf35 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-02-04 03:42:58.334563 | orchestrator | d8f725914c3c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-02-04 03:42:58.334575 | orchestrator | afbf235bcb11 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-04 03:42:58.334585 | orchestrator | 8bb5131685f3 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-04 03:42:58.334596 | orchestrator | 6e6f607e5b09 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-04 03:42:58.334607 | orchestrator | abe0451accac registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-04 03:42:58.334624 | orchestrator | 02abbdc2de9f registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-04 03:42:58.334635 | orchestrator | c1fad0e9a725 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-04 03:42:58.334690 | orchestrator | 0278457ae81e registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-04 03:42:58.334708 | orchestrator | f8cad0d3e2a4 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-04 03:42:58.334719 | orchestrator | d4d5a8c49f61 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-04 03:42:58.334730 | orchestrator | 8e484fd75b2e registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-04 03:42:58.334741 | orchestrator | 11cce0f8edf8 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-04 03:42:58.334753 | orchestrator | 05fa5bc1b00d registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-04 03:42:58.334763 | orchestrator | 6df35dd9a9c1 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-02-04 03:42:58.334774 | orchestrator | 6ffac1fb148b registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-02-04 03:42:58.334785 | orchestrator | 6bd7f90ab2f8 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-02-04 03:42:58.334797 | orchestrator | d2619b7490da registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy
2026-02-04 03:42:58.334808 | orchestrator | 16ec32070e9f registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-04 03:42:58.334819 | orchestrator | 48fbbb908313 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130
"dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-04 03:42:58.334830 | orchestrator | 582cbd660477 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-04 03:42:58.636890 | orchestrator | 2026-02-04 03:42:58.637025 | orchestrator | ## Images @ testbed-node-0 2026-02-04 03:42:58.637045 | orchestrator | 2026-02-04 03:42:58.637058 | orchestrator | + echo 2026-02-04 03:42:58.637070 | orchestrator | + echo '## Images @ testbed-node-0' 2026-02-04 03:42:58.637082 | orchestrator | + echo 2026-02-04 03:42:58.637094 | orchestrator | + osism container testbed-node-0 images 2026-02-04 03:43:01.143757 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-04 03:43:01.143914 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-04 03:43:01.143943 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-04 03:43:01.143963 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-04 03:43:01.143977 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-04 03:43:01.144011 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-04 03:43:01.144022 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-04 03:43:01.144033 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-04 03:43:01.144044 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-04 03:43:01.144055 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-04 03:43:01.144066 | 
orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-04 03:43:01.144077 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-04 03:43:01.144087 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-04 03:43:01.144098 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-04 03:43:01.144109 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-04 03:43:01.144120 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-04 03:43:01.144130 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-04 03:43:01.144141 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-04 03:43:01.144152 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-04 03:43:01.144162 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-04 03:43:01.144173 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-04 03:43:01.144184 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-04 03:43:01.144194 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-04 03:43:01.144205 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-04 
03:43:01.144216 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-04 03:43:01.144266 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-04 03:43:01.144278 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-04 03:43:01.144289 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-04 03:43:01.144306 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-04 03:43:01.144317 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-04 03:43:01.144328 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-04 03:43:01.146114 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-04 03:43:01.146142 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-04 03:43:01.146154 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-04 03:43:01.146164 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-04 03:43:01.146175 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-04 03:43:01.146186 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-04 03:43:01.146197 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-04 03:43:01.146208 | 
orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-04 03:43:01.146219 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-04 03:43:01.146261 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-04 03:43:01.146273 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-04 03:43:01.146283 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-04 03:43:01.146294 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-04 03:43:01.146305 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-04 03:43:01.146316 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-04 03:43:01.146327 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-04 03:43:01.146555 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-04 03:43:01.146578 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-04 03:43:01.146589 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-04 03:43:01.146600 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-04 03:43:01.146613 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-04 03:43:01.146633 | 
orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-04 03:43:01.146653 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-04 03:43:01.146671 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-04 03:43:01.146780 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-04 03:43:01.146799 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-04 03:43:01.146823 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-04 03:43:01.146834 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-04 03:43:01.146854 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-04 03:43:01.146865 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-04 03:43:01.146876 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-04 03:43:01.146887 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-04 03:43:01.146898 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-04 03:43:01.146908 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-04 03:43:01.146919 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-04 03:43:01.146930 | 
orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-04 03:43:01.146941 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-04 03:43:01.146952 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-04 03:43:01.146963 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-04 03:43:01.491092 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-04 03:43:01.491186 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-04 03:43:01.559936 | orchestrator | 2026-02-04 03:43:01.560016 | orchestrator | ## Containers @ testbed-node-1 2026-02-04 03:43:01.560026 | orchestrator | 2026-02-04 03:43:01.560031 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-04 03:43:01.560036 | orchestrator | + echo 2026-02-04 03:43:01.560041 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-02-04 03:43:01.560047 | orchestrator | + echo 2026-02-04 03:43:01.560052 | orchestrator | + osism container testbed-node-1 ps 2026-02-04 03:43:03.987490 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-04 03:43:03.987671 | orchestrator | f7500ffc9406 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-04 03:43:03.987695 | orchestrator | 2bfad28bc34e registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-04 03:43:03.987713 | orchestrator | ac35574534e8 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2026-02-04 03:43:03.987732 | orchestrator | a94fa9801e50 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 
"dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter 2026-02-04 03:43:03.987752 | orchestrator | e248bb4c550f registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-04 03:43:03.987770 | orchestrator | 8ddc736c36e1 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-04 03:43:03.987822 | orchestrator | 83579f2d409b registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-04 03:43:03.987840 | orchestrator | 24d2997271e2 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-04 03:43:03.987859 | orchestrator | c0669a9a8b4e registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-04 03:43:03.987877 | orchestrator | 96a8915b4c94 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler 2026-02-04 03:43:03.987895 | orchestrator | 38e2e99e05a1 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-04 03:43:03.987913 | orchestrator | 81168206eac2 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-04 03:43:03.987950 | orchestrator | 3ba9e39d1558 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-04 03:43:03.987969 | orchestrator | dc1fe0db1d6a 
registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-04 03:43:03.987986 | orchestrator | 020456706eb7 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-04 03:43:03.988003 | orchestrator | 7dbcbcafe31a registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-04 03:43:03.988022 | orchestrator | d7dcc804807a registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-04 03:43:03.988042 | orchestrator | 3e3806523fac registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-04 03:43:03.988297 | orchestrator | 1b734d3ba58b registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-04 03:43:03.988323 | orchestrator | 33dc4d8ae3df registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping 2026-02-04 03:43:03.988342 | orchestrator | d787dc3a269f registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-02-04 03:43:03.988361 | orchestrator | 7540f534516a registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-04 03:43:03.988380 | orchestrator | 0bd6a3a08a27 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-04 03:43:03.988398 | 
orchestrator | 3c299b01fa75 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-04 03:43:03.988428 | orchestrator | 4dbea131c6e3 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-02-04 03:43:03.988445 | orchestrator | c53d6776ef7e registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-02-04 03:43:03.988537 | orchestrator | 26bf516aeed8 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-02-04 03:43:03.988552 | orchestrator | 57a21e04a29e registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-04 03:43:03.988566 | orchestrator | 7ad0c944630f registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-04 03:43:03.988580 | orchestrator | 11d6aba0838a registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-04 03:43:03.988594 | orchestrator | 324f3403e9b7 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-04 03:43:03.988608 | orchestrator | e4047e066f47 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-04 03:43:03.988621 | orchestrator | 3bbdacc67a78 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes 
(healthy) cinder_backup 2026-02-04 03:43:03.988635 | orchestrator | dd3d4ebcde8a registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-04 03:43:03.988649 | orchestrator | 839e17e61e7f registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-04 03:43:03.988663 | orchestrator | 8dfc78295971 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-02-04 03:43:03.988685 | orchestrator | a913239db0f6 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-02-04 03:43:03.988699 | orchestrator | afe4575efeb5 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-04 03:43:03.988800 | orchestrator | 9fd67374a567 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-04 03:43:03.988816 | orchestrator | e50158e616ae registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-04 03:43:03.988830 | orchestrator | 9e81cea7278e registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-04 03:43:03.988885 | orchestrator | f2c739bab02d registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-04 03:43:03.988900 | orchestrator | fed4f5548172 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-04 
03:43:03.988914 | orchestrator | 93d24aa51e14 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-04 03:43:03.988928 | orchestrator | 77bea887c348 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-02-04 03:43:03.988942 | orchestrator | ca6993c6d63c registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-02-04 03:43:03.988956 | orchestrator | d7727a5a4c76 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-02-04 03:43:03.988969 | orchestrator | e5514ea5a92a registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet 2026-02-04 03:43:03.988984 | orchestrator | 21910cca130d registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-02-04 03:43:03.988998 | orchestrator | 4b7494561d11 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-1 2026-02-04 03:43:03.989110 | orchestrator | af24200a0a91 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-02-04 03:43:03.989126 | orchestrator | e8207b686900 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-02-04 03:43:03.989141 | orchestrator | 5d36c83c0eba registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-04 03:43:03.989155 | orchestrator | 2161ca6f733a 
registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-04 03:43:03.989169 | orchestrator | 49ccbb4477c9 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-04 03:43:03.989183 | orchestrator | 4f2f035a9f84 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-04 03:43:03.989197 | orchestrator | ac11ab6b5e38 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-04 03:43:03.989212 | orchestrator | 1a6342d8fcb1 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-04 03:43:03.989226 | orchestrator | 1c9958090e59 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-04 03:43:03.989272 | orchestrator | bac89ee85f6d registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-04 03:43:03.989286 | orchestrator | 78b863269ed9 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-04 03:43:03.989300 | orchestrator | 2fe148297562 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-04 03:43:03.989314 | orchestrator | 487b3e77152c registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-04 03:43:03.989327 | orchestrator | ed55adf6a928 
registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-04 03:43:03.989599 | orchestrator | c741faaf7784 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-04 03:43:03.989621 | orchestrator | fed44db75c00 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-04 03:43:03.989635 | orchestrator | ab7daa0cc3f3 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-04 03:43:03.989649 | orchestrator | e0be3bf4dc4d registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-02-04 03:43:03.989663 | orchestrator | 61089fecb2dc registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-04 03:43:03.989682 | orchestrator | 8539e7001840 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-04 03:43:03.989696 | orchestrator | 6c42c0633ff9 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-04 03:43:04.320821 | orchestrator | 2026-02-04 03:43:04.320940 | orchestrator | ## Images @ testbed-node-1 2026-02-04 03:43:04.320962 | orchestrator | 2026-02-04 03:43:04.320981 | orchestrator | + echo 2026-02-04 03:43:04.320998 | orchestrator | + echo '## Images @ testbed-node-1' 2026-02-04 03:43:04.321018 | orchestrator | + echo 2026-02-04 03:43:04.321038 | orchestrator | + osism container testbed-node-1 images 2026-02-04 03:43:06.733690 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-04 03:43:06.733800 | orchestrator | 
registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB
2026-02-04 03:43:06.733817 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB
2026-02-04 03:43:06.733829 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB
2026-02-04 03:43:06.733841 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB
2026-02-04 03:43:06.733852 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB
2026-02-04 03:43:06.733863 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-04 03:43:06.733897 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-04 03:43:06.733910 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB
2026-02-04 03:43:06.733920 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB
2026-02-04 03:43:06.733931 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB
2026-02-04 03:43:06.733942 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-04 03:43:06.733953 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB
2026-02-04 03:43:06.733963 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB
2026-02-04 03:43:06.733974 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB
2026-02-04 03:43:06.733984 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB
2026-02-04 03:43:06.733995 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB
2026-02-04 03:43:06.734005 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB
2026-02-04 03:43:06.734102 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-04 03:43:06.734117 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB
2026-02-04 03:43:06.734129 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-04 03:43:06.734140 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB
2026-02-04 03:43:06.734151 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB
2026-02-04 03:43:06.734161 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB
2026-02-04 03:43:06.734172 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB
2026-02-04 03:43:06.734183 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB
2026-02-04 03:43:06.734194 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB
2026-02-04 03:43:06.734204 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB
2026-02-04 03:43:06.734215 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB
2026-02-04 03:43:06.734226 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB
2026-02-04 03:43:06.734299 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB
2026-02-04 03:43:06.734311 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB
2026-02-04 03:43:06.734341 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB
2026-02-04 03:43:06.734364 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB
2026-02-04 03:43:06.734376 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB
2026-02-04 03:43:06.734387 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB
2026-02-04 03:43:06.734398 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB
2026-02-04 03:43:06.734408 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB
2026-02-04 03:43:06.734436 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB
2026-02-04 03:43:06.734447 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB
2026-02-04 03:43:06.734458 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB
2026-02-04 03:43:06.734469 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB
2026-02-04 03:43:06.734480 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB
2026-02-04 03:43:06.734491 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB
2026-02-04 03:43:06.734502 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB
2026-02-04 03:43:06.734512 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB
2026-02-04 03:43:06.734523 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB
2026-02-04 03:43:06.734534 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB
2026-02-04 03:43:06.734545 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB
2026-02-04 03:43:06.734556 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB
2026-02-04 03:43:06.734567 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB
2026-02-04 03:43:06.734578 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB
2026-02-04 03:43:06.734588 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB
2026-02-04 03:43:06.734599 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB
2026-02-04 03:43:06.734610 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB
2026-02-04 03:43:06.734620 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB
2026-02-04 03:43:06.734632 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB
2026-02-04 03:43:06.734642 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB
2026-02-04 03:43:06.734653 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB
2026-02-04 03:43:06.734664 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB
2026-02-04 03:43:06.734682 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB
2026-02-04 03:43:06.734693 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB
2026-02-04 03:43:06.734704 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB
2026-02-04 03:43:06.734714 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-02-04 03:43:06.734733 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-02-04 03:43:06.734745 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-02-04 03:43:06.734756 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-02-04 03:43:06.734767 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-02-04 03:43:06.734777 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-02-04 03:43:06.734788 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB
2026-02-04 03:43:07.060774 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-04 03:43:07.061317 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-04 03:43:07.121515 | orchestrator |
2026-02-04 03:43:07.121601 | orchestrator | ## Containers @ testbed-node-2
2026-02-04 03:43:07.121615 | orchestrator |
2026-02-04 03:43:07.121627 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-04 03:43:07.121638 | orchestrator | + echo
2026-02-04 03:43:07.121650 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-02-04 03:43:07.121662 | orchestrator | + echo
2026-02-04 03:43:07.121673 | orchestrator | + osism container testbed-node-2 ps
2026-02-04 03:43:09.616908 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-04 03:43:09.617008 | orchestrator | ec4ea03b14df registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-02-04 03:43:09.617025 | orchestrator | 68253255a9b6 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api
2026-02-04 03:43:09.617037 | orchestrator | a57293b95ab4 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-02-04 03:43:09.617048 | orchestrator | db76561c8cae registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_elasticsearch_exporter
2026-02-04 03:43:09.617061 | orchestrator | fad126d04381 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-02-04 03:43:09.617073 | orchestrator | 87e24c832593 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-02-04 03:43:09.617085 | orchestrator | 224b0cefa0e7 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-02-04 03:43:09.617097 | orchestrator | fc5a309a5329 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-04 03:43:09.617132 | orchestrator | f0c1b990fc31 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_share
2026-02-04 03:43:09.617144 | orchestrator | 17e87b93eebc registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-02-04 03:43:09.617155 | orchestrator | 14c832de0509 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-02-04 03:43:09.617166 | orchestrator | 5735c22ed7ae registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-02-04 03:43:09.617207 | orchestrator | 3e535a496d3c registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-02-04 03:43:09.617219 | orchestrator | 7d48d3a2e75e registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-02-04 03:43:09.617255 | orchestrator | 113778a4fafc registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-02-04 03:43:09.617267 | orchestrator | 2e6743992ccd registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-02-04 03:43:09.617278 | orchestrator | 47d48b46e37e registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-02-04 03:43:09.617289 | orchestrator | 20bc9338c625 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-02-04 03:43:09.617300 | orchestrator | 64724e50bd96 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_worker
2026-02-04 03:43:09.617328 | orchestrator | 5c5d3e5c87b5 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-02-04 03:43:09.617340 | orchestrator | bf3d6daa95fd registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-02-04 03:43:09.617351 | orchestrator | e83d9f8b2ff4 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-02-04 03:43:09.617362 | orchestrator | 26a39334d851 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-02-04 03:43:09.617373 | orchestrator | 48649a797da5 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-02-04 03:43:09.617384 | orchestrator | 585ce2f268e5 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-02-04 03:43:09.617402 | orchestrator | e153b6073549 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-02-04 03:43:09.617413 | orchestrator | 4181937a0164 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-02-04 03:43:09.617424 | orchestrator | cddcc65bc0d4 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-02-04 03:43:09.617435 | orchestrator | 47b8a5f98ff0 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-02-04 03:43:09.617447 | orchestrator | 7101dececcb1 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-02-04 03:43:09.617461 | orchestrator | 75fe1ad7e3bd registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-02-04 03:43:09.617474 | orchestrator | 73758577081a registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-02-04 03:43:09.617488 | orchestrator | 4f4db592f6a7 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-02-04 03:43:09.617554 | orchestrator | 604e78bc3fe1 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume
2026-02-04 03:43:09.617568 | orchestrator | a1e8b8925ec3 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-02-04 03:43:09.617581 | orchestrator | 810bb3043e4d registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-02-04 03:43:09.617595 | orchestrator | 54660bcbaa30 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-02-04 03:43:09.617609 | orchestrator | f0036081a196 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console
2026-02-04 03:43:09.617622 | orchestrator | faab21544b66 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver
2026-02-04 03:43:09.617643 | orchestrator | 0cd7169abdbe registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon
2026-02-04 03:43:09.617658 | orchestrator | 714884391dc5 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy
2026-02-04 03:43:09.617671 | orchestrator | 76dc0cff8eb8 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor
2026-02-04 03:43:09.617684 | orchestrator | df6e3cc88a1e registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api
2026-02-04 03:43:09.617706 | orchestrator | 3980e880122c registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler
2026-02-04 03:43:09.617719 | orchestrator | 90c3885f3177 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server
2026-02-04 03:43:09.617733 | orchestrator | 9c76c3ceae07 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api
2026-02-04 03:43:09.617746 | orchestrator | 2aae952970f2 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone
2026-02-04 03:43:09.617759 | orchestrator | 0c9724a680fb registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet
2026-02-04 03:43:09.617773 | orchestrator | afcaa81960b8 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh
2026-02-04 03:43:09.617787 | orchestrator | 5772873d8069 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-2
2026-02-04 03:43:09.617801 | orchestrator | 3f0e10252fb4 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-02-04 03:43:09.617819 | orchestrator | c48be97cec44 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2
2026-02-04 03:43:09.617830 | orchestrator | 839689dbb1fd registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-04 03:43:09.617845 | orchestrator | 4b320a49ac7f registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-04 03:43:09.617945 | orchestrator | f00cbb511436 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-04 03:43:09.617961 | orchestrator | 9c15672cdb2f registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-04 03:43:09.617972 | orchestrator | 965d5994b4d3 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-04 03:43:09.617983 | orchestrator | 8d816397f757 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-04 03:43:09.617994 | orchestrator | f4a8128a039f registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-04 03:43:09.618005 | orchestrator | 92155ac319e1 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-04 03:43:09.618080 | orchestrator | 27aa4dd2d3c2 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-04 03:43:09.618101 | orchestrator | a3b02fdafc37 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-04 03:43:09.618113 | orchestrator | 59fb3260cf1b registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-04 03:43:09.618124 | orchestrator | b3fe034bb8b2 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-04 03:43:09.618135 | orchestrator | 2cb8eb586d08 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-02-04 03:43:09.618147 | orchestrator | 31693e55002c registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-02-04 03:43:09.618158 | orchestrator | a0c44b13063a registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-02-04 03:43:09.618169 | orchestrator | a1f8d3008495 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy
2026-02-04 03:43:09.618180 | orchestrator | 99b504b0e077 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-04 03:43:09.618191 | orchestrator | 671493e8a19f registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-04 03:43:09.618202 | orchestrator | dd4538a63439 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-04 03:43:09.942288 | orchestrator |
2026-02-04 03:43:09.942396 | orchestrator | ## Images @ testbed-node-2
2026-02-04 03:43:09.942415 | orchestrator |
2026-02-04 03:43:09.942428 | orchestrator | + echo
2026-02-04 03:43:09.942441 | orchestrator | + echo '## Images @ testbed-node-2'
2026-02-04 03:43:09.942453 | orchestrator | + echo
2026-02-04 03:43:09.942464 | orchestrator | + osism container testbed-node-2 images
2026-02-04 03:43:12.388175 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-04 03:43:12.388344 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB
2026-02-04 03:43:12.388359 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB
2026-02-04 03:43:12.388366 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB
2026-02-04 03:43:12.388387 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB
2026-02-04 03:43:12.388394 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB
2026-02-04 03:43:12.388400 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-04 03:43:12.388406 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-04 03:43:12.388413 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB
2026-02-04 03:43:12.388436 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB
2026-02-04 03:43:12.388447 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB
2026-02-04 03:43:12.388462 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-04 03:43:12.388472 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB
2026-02-04 03:43:12.388482 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB
2026-02-04 03:43:12.388492 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB
2026-02-04 03:43:12.388501 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB
2026-02-04 03:43:12.388510 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB
2026-02-04 03:43:12.388520 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB
2026-02-04 03:43:12.388529 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-04 03:43:12.388538 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB
2026-02-04 03:43:12.388548 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-04 03:43:12.388558 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB
2026-02-04 03:43:12.388569 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB
2026-02-04 03:43:12.388580 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB
2026-02-04 03:43:12.388591 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB
2026-02-04 03:43:12.388597 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB
2026-02-04 03:43:12.388603 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB
2026-02-04 03:43:12.388610 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB
2026-02-04 03:43:12.388616 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB
2026-02-04 03:43:12.388622 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB
2026-02-04 03:43:12.388628 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB
2026-02-04 03:43:12.388635 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB
2026-02-04 03:43:12.388657 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB
2026-02-04 03:43:12.388663 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB
2026-02-04 03:43:12.388669 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB
2026-02-04 03:43:12.388676 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB
2026-02-04 03:43:12.388689 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB
2026-02-04 03:43:12.388695 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB
2026-02-04 03:43:12.388702 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB
2026-02-04 03:43:12.388715 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB
2026-02-04 03:43:12.388722 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB
2026-02-04 03:43:12.388730 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB
2026-02-04 03:43:12.388737 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB
2026-02-04 03:43:12.388744 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB
2026-02-04 03:43:12.388751 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB
2026-02-04 03:43:12.388758 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB
2026-02-04 03:43:12.388766 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB
2026-02-04 03:43:12.388773 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB
2026-02-04 03:43:12.388780 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB
2026-02-04 03:43:12.388788 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB
2026-02-04 03:43:12.388795 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB
2026-02-04 03:43:12.388803 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB
2026-02-04 03:43:12.388810 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB
2026-02-04 03:43:12.388817 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB
2026-02-04 03:43:12.388825 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB
2026-02-04 03:43:12.388833 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB
2026-02-04 03:43:12.388843 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB
2026-02-04 03:43:12.388854 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB
2026-02-04 03:43:12.388864 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB
2026-02-04 03:43:12.388873 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB
2026-02-04 03:43:12.388883 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB
2026-02-04 03:43:12.388893 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB
2026-02-04 03:43:12.388910 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB
2026-02-04 03:43:12.388920 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-02-04 03:43:12.388932 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-02-04 03:43:12.388939 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-02-04 03:43:12.388946 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-02-04 03:43:12.388952 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-02-04 03:43:12.388962 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-02-04 03:43:12.388969 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB
2026-02-04 03:43:12.749472 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-02-04 03:43:12.757297 | orchestrator | + set -e
2026-02-04 03:43:12.757373 | orchestrator | + source /opt/manager-vars.sh
2026-02-04 03:43:12.757388 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-04 03:43:12.757400 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-04 03:43:12.757411 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-04 03:43:12.757422 | orchestrator | ++ CEPH_VERSION=reef
2026-02-04 03:43:12.757433 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-04 03:43:12.757445 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-04 03:43:12.757456 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-04 03:43:12.757467 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-04 03:43:12.757478 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-04 03:43:12.757489 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-04 03:43:12.757499 | orchestrator | ++ export ARA=false
2026-02-04 03:43:12.757511 | orchestrator | ++ ARA=false
2026-02-04 03:43:12.757522 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-04 03:43:12.757533 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-04 03:43:12.757543 | orchestrator | ++ export TEMPEST=false
2026-02-04 03:43:12.757554 | orchestrator | ++ TEMPEST=false
2026-02-04 03:43:12.757565 | orchestrator | ++ export IS_ZUUL=true
2026-02-04 03:43:12.757602 | orchestrator | ++ IS_ZUUL=true
2026-02-04 03:43:12.757613 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-04 03:43:12.757625 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-04 03:43:12.757636 | orchestrator | ++ export EXTERNAL_API=false
2026-02-04 03:43:12.757646 | orchestrator | ++ EXTERNAL_API=false
2026-02-04 03:43:12.757657 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-04 03:43:12.757667 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-04 03:43:12.757679 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-04 03:43:12.757690 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-04 03:43:12.757700 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-04 03:43:12.757711 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-04 03:43:12.757722 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-04 03:43:12.757733 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-02-04 03:43:12.762962 | orchestrator | + set -e
2026-02-04 03:43:12.762994 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-04 03:43:12.763006 | orchestrator | ++ export INTERACTIVE=false
2026-02-04 03:43:12.763018 | orchestrator | ++ INTERACTIVE=false
2026-02-04 03:43:12.763029 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-04 03:43:12.763039 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-04 03:43:12.763050 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-04 03:43:12.763656 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-04 03:43:12.767142 | orchestrator |
2026-02-04 03:43:12.767228 | orchestrator | # Ceph status
2026-02-04 03:43:12.767305 | orchestrator |
2026-02-04 03:43:12.767318 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-04 03:43:12.767330 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-04 03:43:12.767342 | orchestrator | + echo
2026-02-04 03:43:12.767353 | orchestrator | + echo '# Ceph status'
2026-02-04 03:43:12.767390 | orchestrator | + echo
2026-02-04 03:43:12.767401 | orchestrator | + ceph -s
2026-02-04 03:43:13.400012 | orchestrator | cluster:
2026-02-04 03:43:13.400115 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-02-04 03:43:13.400133 | orchestrator | health: HEALTH_OK
2026-02-04 03:43:13.400146 | orchestrator |
2026-02-04 03:43:13.400159 | orchestrator | services:
2026-02-04 03:43:13.400171 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 68m)
2026-02-04 03:43:13.400184 | orchestrator | mgr: testbed-node-1(active, since 56m), standbys: testbed-node-0, testbed-node-2
2026-02-04 03:43:13.400197 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-02-04 03:43:13.400209 | orchestrator | osd: 6 osds: 6 up (since 64m), 6 in (since 65m)
2026-02-04 03:43:13.400221 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-02-04 03:43:13.400306 | orchestrator |
2026-02-04 03:43:13.400322 | orchestrator | data:
2026-02-04 03:43:13.400333 | orchestrator | volumes: 1/1 healthy
2026-02-04 03:43:13.400344 | orchestrator | pools: 14 pools, 401 pgs
2026-02-04 03:43:13.400356 | orchestrator | objects: 556 objects, 2.2 GiB
2026-02-04 03:43:13.400367 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-02-04 03:43:13.400378 | orchestrator | pgs: 401 active+clean
2026-02-04 03:43:13.400389 | orchestrator |
2026-02-04 03:43:13.443867 | orchestrator |
2026-02-04 03:43:13.443968 | orchestrator | # Ceph versions
2026-02-04 03:43:13.443985 | orchestrator |
2026-02-04 03:43:13.443997 | orchestrator | + echo
2026-02-04 03:43:13.444009 | orchestrator | + echo '# Ceph versions'
2026-02-04 03:43:13.444021 | orchestrator | + echo
2026-02-04 03:43:13.444032 | orchestrator | + ceph versions
2026-02-04 03:43:14.026182 | orchestrator | {
2026-02-04 03:43:14.026330 | orchestrator | "mon": {
2026-02-04 03:43:14.026349 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-04 03:43:14.026362 | orchestrator | },
2026-02-04 03:43:14.026374 | orchestrator | "mgr": {
2026-02-04 03:43:14.026386 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-04 03:43:14.026397 | orchestrator | },
2026-02-04 03:43:14.026408 | orchestrator | "osd": {
2026-02-04 03:43:14.026419 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-02-04 03:43:14.026430 | orchestrator | },
2026-02-04 03:43:14.026441 | orchestrator | "mds": {
2026-02-04 03:43:14.026452 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-04 03:43:14.026462 | orchestrator | },
2026-02-04 03:43:14.026473 | orchestrator | "rgw": {
2026-02-04 03:43:14.026484 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-04 03:43:14.026495 | orchestrator | },
2026-02-04 03:43:14.026506 | orchestrator | "overall": {
2026-02-04 03:43:14.026517 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-02-04 03:43:14.026528 | orchestrator | }
2026-02-04 03:43:14.026539 | orchestrator | }
2026-02-04 03:43:14.081408 | orchestrator |
2026-02-04 03:43:14.081503 | orchestrator | # Ceph OSD tree
2026-02-04 03:43:14.081517 | orchestrator |
2026-02-04 03:43:14.081530 | orchestrator | + echo
2026-02-04 03:43:14.081542 | orchestrator | + echo '# Ceph OSD tree'
2026-02-04
03:43:14.081554 | orchestrator | + echo 2026-02-04 03:43:14.081566 | orchestrator | + ceph osd df tree 2026-02-04 03:43:14.628948 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-02-04 03:43:14.629056 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 398 MiB 113 GiB 5.89 1.00 - root default 2026-02-04 03:43:14.629070 | orchestrator | -5 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-3 2026-02-04 03:43:14.629081 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.18 1.05 199 up osd.0 2026-02-04 03:43:14.629092 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 66 MiB 19 GiB 5.58 0.95 193 up osd.5 2026-02-04 03:43:14.629103 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-4 2026-02-04 03:43:14.629113 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.2 GiB 1.2 GiB 1 KiB 66 MiB 19 GiB 6.16 1.05 192 up osd.1 2026-02-04 03:43:14.629149 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.1 GiB 1 KiB 62 MiB 19 GiB 5.60 0.95 196 up osd.4 2026-02-04 03:43:14.629161 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2026-02-04 03:43:14.629172 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 78 MiB 19 GiB 5.40 0.92 196 up osd.2 2026-02-04 03:43:14.629183 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 66 MiB 19 GiB 6.43 1.09 194 up osd.3 2026-02-04 03:43:14.629193 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 398 MiB 113 GiB 5.89 2026-02-04 03:43:14.629204 | orchestrator | MIN/MAX VAR: 0.92/1.09 STDDEV: 0.38 2026-02-04 03:43:14.673663 | orchestrator | 2026-02-04 03:43:14.673749 | orchestrator | # Ceph monitor status 2026-02-04 03:43:14.673764 | orchestrator | 2026-02-04 03:43:14.673775 | orchestrator | + echo 2026-02-04 03:43:14.673787 | orchestrator | + echo '# 
Ceph monitor status' 2026-02-04 03:43:14.673798 | orchestrator | + echo 2026-02-04 03:43:14.673810 | orchestrator | + ceph mon stat 2026-02-04 03:43:15.246262 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-02-04 03:43:15.291185 | orchestrator | 2026-02-04 03:43:15.291312 | orchestrator | # Ceph quorum status 2026-02-04 03:43:15.291328 | orchestrator | 2026-02-04 03:43:15.291340 | orchestrator | + echo 2026-02-04 03:43:15.291352 | orchestrator | + echo '# Ceph quorum status' 2026-02-04 03:43:15.291364 | orchestrator | + echo 2026-02-04 03:43:15.291801 | orchestrator | + ceph quorum_status 2026-02-04 03:43:15.292265 | orchestrator | + jq 2026-02-04 03:43:15.992768 | orchestrator | { 2026-02-04 03:43:15.992855 | orchestrator | "election_epoch": 8, 2026-02-04 03:43:15.992868 | orchestrator | "quorum": [ 2026-02-04 03:43:15.992876 | orchestrator | 0, 2026-02-04 03:43:15.992884 | orchestrator | 1, 2026-02-04 03:43:15.992891 | orchestrator | 2 2026-02-04 03:43:15.992899 | orchestrator | ], 2026-02-04 03:43:15.992906 | orchestrator | "quorum_names": [ 2026-02-04 03:43:15.992914 | orchestrator | "testbed-node-0", 2026-02-04 03:43:15.992921 | orchestrator | "testbed-node-1", 2026-02-04 03:43:15.992929 | orchestrator | "testbed-node-2" 2026-02-04 03:43:15.992937 | orchestrator | ], 2026-02-04 03:43:15.992944 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-02-04 03:43:15.992953 | orchestrator | "quorum_age": 4111, 2026-02-04 03:43:15.992961 | orchestrator | "features": { 2026-02-04 03:43:15.992969 | orchestrator | "quorum_con": "4540138322906710015", 2026-02-04 03:43:15.992976 | orchestrator | "quorum_mon": [ 2026-02-04 03:43:15.992983 | 
orchestrator | "kraken", 2026-02-04 03:43:15.992991 | orchestrator | "luminous", 2026-02-04 03:43:15.992998 | orchestrator | "mimic", 2026-02-04 03:43:15.993006 | orchestrator | "osdmap-prune", 2026-02-04 03:43:15.993013 | orchestrator | "nautilus", 2026-02-04 03:43:15.993020 | orchestrator | "octopus", 2026-02-04 03:43:15.993027 | orchestrator | "pacific", 2026-02-04 03:43:15.993034 | orchestrator | "elector-pinging", 2026-02-04 03:43:15.993042 | orchestrator | "quincy", 2026-02-04 03:43:15.993049 | orchestrator | "reef" 2026-02-04 03:43:15.993056 | orchestrator | ] 2026-02-04 03:43:15.993065 | orchestrator | }, 2026-02-04 03:43:15.993077 | orchestrator | "monmap": { 2026-02-04 03:43:15.993090 | orchestrator | "epoch": 1, 2026-02-04 03:43:15.993102 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-02-04 03:43:15.993114 | orchestrator | "modified": "2026-02-04T02:34:26.568054Z", 2026-02-04 03:43:15.993127 | orchestrator | "created": "2026-02-04T02:34:26.568054Z", 2026-02-04 03:43:15.993139 | orchestrator | "min_mon_release": 18, 2026-02-04 03:43:15.993151 | orchestrator | "min_mon_release_name": "reef", 2026-02-04 03:43:15.993162 | orchestrator | "election_strategy": 1, 2026-02-04 03:43:15.993173 | orchestrator | "disallowed_leaders: ": "", 2026-02-04 03:43:15.993184 | orchestrator | "stretch_mode": false, 2026-02-04 03:43:15.993197 | orchestrator | "tiebreaker_mon": "", 2026-02-04 03:43:15.993210 | orchestrator | "removed_ranks: ": "", 2026-02-04 03:43:15.993223 | orchestrator | "features": { 2026-02-04 03:43:15.993261 | orchestrator | "persistent": [ 2026-02-04 03:43:15.993274 | orchestrator | "kraken", 2026-02-04 03:43:15.993286 | orchestrator | "luminous", 2026-02-04 03:43:15.993321 | orchestrator | "mimic", 2026-02-04 03:43:15.993329 | orchestrator | "osdmap-prune", 2026-02-04 03:43:15.993336 | orchestrator | "nautilus", 2026-02-04 03:43:15.993343 | orchestrator | "octopus", 2026-02-04 03:43:15.993350 | orchestrator | "pacific", 2026-02-04 
03:43:15.993358 | orchestrator | "elector-pinging", 2026-02-04 03:43:15.993365 | orchestrator | "quincy", 2026-02-04 03:43:15.993372 | orchestrator | "reef" 2026-02-04 03:43:15.993379 | orchestrator | ], 2026-02-04 03:43:15.993386 | orchestrator | "optional": [] 2026-02-04 03:43:15.993393 | orchestrator | }, 2026-02-04 03:43:15.993401 | orchestrator | "mons": [ 2026-02-04 03:43:15.993408 | orchestrator | { 2026-02-04 03:43:15.993429 | orchestrator | "rank": 0, 2026-02-04 03:43:15.993437 | orchestrator | "name": "testbed-node-0", 2026-02-04 03:43:15.993445 | orchestrator | "public_addrs": { 2026-02-04 03:43:15.993452 | orchestrator | "addrvec": [ 2026-02-04 03:43:15.993459 | orchestrator | { 2026-02-04 03:43:15.993467 | orchestrator | "type": "v2", 2026-02-04 03:43:15.993475 | orchestrator | "addr": "192.168.16.8:3300", 2026-02-04 03:43:15.993482 | orchestrator | "nonce": 0 2026-02-04 03:43:15.993489 | orchestrator | }, 2026-02-04 03:43:15.993497 | orchestrator | { 2026-02-04 03:43:15.993504 | orchestrator | "type": "v1", 2026-02-04 03:43:15.993511 | orchestrator | "addr": "192.168.16.8:6789", 2026-02-04 03:43:15.993518 | orchestrator | "nonce": 0 2026-02-04 03:43:15.993526 | orchestrator | } 2026-02-04 03:43:15.993533 | orchestrator | ] 2026-02-04 03:43:15.993540 | orchestrator | }, 2026-02-04 03:43:15.993547 | orchestrator | "addr": "192.168.16.8:6789/0", 2026-02-04 03:43:15.993554 | orchestrator | "public_addr": "192.168.16.8:6789/0", 2026-02-04 03:43:15.993561 | orchestrator | "priority": 0, 2026-02-04 03:43:15.993569 | orchestrator | "weight": 0, 2026-02-04 03:43:15.993576 | orchestrator | "crush_location": "{}" 2026-02-04 03:43:15.993583 | orchestrator | }, 2026-02-04 03:43:15.993590 | orchestrator | { 2026-02-04 03:43:15.993597 | orchestrator | "rank": 1, 2026-02-04 03:43:15.993605 | orchestrator | "name": "testbed-node-1", 2026-02-04 03:43:15.993612 | orchestrator | "public_addrs": { 2026-02-04 03:43:15.993619 | orchestrator | "addrvec": [ 2026-02-04 
03:43:15.993626 | orchestrator | { 2026-02-04 03:43:15.993633 | orchestrator | "type": "v2", 2026-02-04 03:43:15.993641 | orchestrator | "addr": "192.168.16.11:3300", 2026-02-04 03:43:15.993648 | orchestrator | "nonce": 0 2026-02-04 03:43:15.993655 | orchestrator | }, 2026-02-04 03:43:15.993662 | orchestrator | { 2026-02-04 03:43:15.993669 | orchestrator | "type": "v1", 2026-02-04 03:43:15.993677 | orchestrator | "addr": "192.168.16.11:6789", 2026-02-04 03:43:15.993684 | orchestrator | "nonce": 0 2026-02-04 03:43:15.993691 | orchestrator | } 2026-02-04 03:43:15.993698 | orchestrator | ] 2026-02-04 03:43:15.993705 | orchestrator | }, 2026-02-04 03:43:15.993712 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-02-04 03:43:15.993720 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-02-04 03:43:15.993727 | orchestrator | "priority": 0, 2026-02-04 03:43:15.993734 | orchestrator | "weight": 0, 2026-02-04 03:43:15.993741 | orchestrator | "crush_location": "{}" 2026-02-04 03:43:15.993750 | orchestrator | }, 2026-02-04 03:43:15.993762 | orchestrator | { 2026-02-04 03:43:15.993774 | orchestrator | "rank": 2, 2026-02-04 03:43:15.993785 | orchestrator | "name": "testbed-node-2", 2026-02-04 03:43:15.993796 | orchestrator | "public_addrs": { 2026-02-04 03:43:15.993806 | orchestrator | "addrvec": [ 2026-02-04 03:43:15.993817 | orchestrator | { 2026-02-04 03:43:15.993830 | orchestrator | "type": "v2", 2026-02-04 03:43:15.993839 | orchestrator | "addr": "192.168.16.12:3300", 2026-02-04 03:43:15.993845 | orchestrator | "nonce": 0 2026-02-04 03:43:15.993852 | orchestrator | }, 2026-02-04 03:43:15.993859 | orchestrator | { 2026-02-04 03:43:15.993865 | orchestrator | "type": "v1", 2026-02-04 03:43:15.993872 | orchestrator | "addr": "192.168.16.12:6789", 2026-02-04 03:43:15.993879 | orchestrator | "nonce": 0 2026-02-04 03:43:15.993886 | orchestrator | } 2026-02-04 03:43:15.993892 | orchestrator | ] 2026-02-04 03:43:15.993899 | orchestrator | }, 2026-02-04 03:43:15.993906 
| orchestrator | "addr": "192.168.16.12:6789/0", 2026-02-04 03:43:15.993913 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-02-04 03:43:15.993919 | orchestrator | "priority": 0, 2026-02-04 03:43:15.993935 | orchestrator | "weight": 0, 2026-02-04 03:43:15.993942 | orchestrator | "crush_location": "{}" 2026-02-04 03:43:15.993949 | orchestrator | } 2026-02-04 03:43:15.993956 | orchestrator | ] 2026-02-04 03:43:15.993962 | orchestrator | } 2026-02-04 03:43:15.993969 | orchestrator | } 2026-02-04 03:43:15.993976 | orchestrator | 2026-02-04 03:43:15.993982 | orchestrator | # Ceph free space status 2026-02-04 03:43:15.993989 | orchestrator | 2026-02-04 03:43:15.993996 | orchestrator | + echo 2026-02-04 03:43:15.994003 | orchestrator | + echo '# Ceph free space status' 2026-02-04 03:43:15.994009 | orchestrator | + echo 2026-02-04 03:43:15.994063 | orchestrator | + ceph df 2026-02-04 03:43:16.653639 | orchestrator | --- RAW STORAGE --- 2026-02-04 03:43:16.653771 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-02-04 03:43:16.653803 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-02-04 03:43:16.653829 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.89 2026-02-04 03:43:16.653841 | orchestrator | 2026-02-04 03:43:16.653852 | orchestrator | --- POOLS --- 2026-02-04 03:43:16.653864 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-02-04 03:43:16.653877 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2026-02-04 03:43:16.653888 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-02-04 03:43:16.653899 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-02-04 03:43:16.653909 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-02-04 03:43:16.653920 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-02-04 03:43:16.653932 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-02-04 03:43:16.653943 | orchestrator | default.rgw.log 7 32 
3.6 KiB 209 408 KiB 0 35 GiB 2026-02-04 03:43:16.653954 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-02-04 03:43:16.653965 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2026-02-04 03:43:16.653976 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-02-04 03:43:16.653987 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-02-04 03:43:16.653997 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.90 35 GiB 2026-02-04 03:43:16.654008 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-02-04 03:43:16.654075 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-02-04 03:43:16.700593 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-04 03:43:16.763547 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-04 03:43:16.763640 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-02-04 03:43:16.763656 | orchestrator | + osism apply facts 2026-02-04 03:43:24.751518 | orchestrator | 2026-02-04 03:43:24 | INFO  | Task 243ba191-7920-4b3b-82a0-93f53f805f6d (facts) was prepared for execution. 2026-02-04 03:43:24.751634 | orchestrator | 2026-02-04 03:43:24 | INFO  | It takes a moment until task 243ba191-7920-4b3b-82a0-93f53f805f6d (facts) has been started and output is visible here. 
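The version gate traced above (`semver 9.5.0 5.0.0` printing `1`, followed by `[[ 1 -eq -1 ]]`) takes the legacy branch only when the installed manager version sorts before 5.0.0. A minimal pure-bash sketch of that three-way comparison; `semver_cmp` is a hypothetical stand-in for the `semver` helper the script actually calls, and handles plain MAJOR.MINOR.PATCH strings only:

```shell
set -eu

semver_cmp() {
    # Print -1, 0, or 1 depending on whether $1 sorts before, equal to,
    # or after $2. Assumes plain MAJOR.MINOR.PATCH (no pre-release tags),
    # unlike a full semver implementation.
    local IFS=. a b i
    read -ra a <<<"$1"
    read -ra b <<<"$2"
    for i in 0 1 2; do
        if (( a[i] < b[i] )); then echo -1; return; fi
        if (( a[i] > b[i] )); then echo 1; return; fi
    done
    echo 0
}

# Mirrors the check in the log: 9.5.0 sorts after 5.0.0, so this prints 1
# and the pre-5.0.0 branch guarded by `-eq -1` is skipped.
semver_cmp 9.5.0 5.0.0
```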
2026-02-04 03:43:38.389593 | orchestrator | 2026-02-04 03:43:38.389711 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-04 03:43:38.389729 | orchestrator | 2026-02-04 03:43:38.389742 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-04 03:43:38.389754 | orchestrator | Wednesday 04 February 2026 03:43:29 +0000 (0:00:00.289) 0:00:00.289 **** 2026-02-04 03:43:38.389766 | orchestrator | ok: [testbed-manager] 2026-02-04 03:43:38.389779 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:43:38.389790 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:43:38.389801 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:43:38.389813 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:43:38.389824 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:43:38.389835 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:43:38.389846 | orchestrator | 2026-02-04 03:43:38.389858 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-04 03:43:38.389895 | orchestrator | Wednesday 04 February 2026 03:43:30 +0000 (0:00:01.164) 0:00:01.453 **** 2026-02-04 03:43:38.389907 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:43:38.389920 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:43:38.389931 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:43:38.389942 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:43:38.389953 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:43:38.389964 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:43:38.389975 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:43:38.389986 | orchestrator | 2026-02-04 03:43:38.389997 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-04 03:43:38.390009 | orchestrator | 2026-02-04 03:43:38.390085 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-04 03:43:38.390097 | orchestrator | Wednesday 04 February 2026 03:43:31 +0000 (0:00:01.388) 0:00:02.842 **** 2026-02-04 03:43:38.390108 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:43:38.390119 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:43:38.390129 | orchestrator | ok: [testbed-manager] 2026-02-04 03:43:38.390142 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:43:38.390155 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:43:38.390168 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:43:38.390180 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:43:38.390192 | orchestrator | 2026-02-04 03:43:38.390205 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-04 03:43:38.390217 | orchestrator | 2026-02-04 03:43:38.390231 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-04 03:43:38.390266 | orchestrator | Wednesday 04 February 2026 03:43:37 +0000 (0:00:05.532) 0:00:08.374 **** 2026-02-04 03:43:38.390279 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:43:38.390292 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:43:38.390304 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:43:38.390317 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:43:38.390330 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:43:38.390343 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:43:38.390356 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:43:38.390369 | orchestrator | 2026-02-04 03:43:38.390382 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:43:38.390395 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 03:43:38.390409 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-04 03:43:38.390422 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 03:43:38.390448 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 03:43:38.390462 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 03:43:38.390475 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 03:43:38.390488 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 03:43:38.390499 | orchestrator | 2026-02-04 03:43:38.390510 | orchestrator | 2026-02-04 03:43:38.390521 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 03:43:38.390532 | orchestrator | Wednesday 04 February 2026 03:43:37 +0000 (0:00:00.621) 0:00:08.995 **** 2026-02-04 03:43:38.390543 | orchestrator | =============================================================================== 2026-02-04 03:43:38.390554 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.53s 2026-02-04 03:43:38.390574 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.39s 2026-02-04 03:43:38.390585 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.16s 2026-02-04 03:43:38.390596 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.62s 2026-02-04 03:43:38.700741 | orchestrator | + osism validate ceph-mons 2026-02-04 03:44:11.067607 | orchestrator | 2026-02-04 03:44:11.067713 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-02-04 03:44:11.067727 | orchestrator | 2026-02-04 03:44:11.067738 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-02-04 03:44:11.067748 | orchestrator | Wednesday 04 February 2026 03:43:55 +0000 (0:00:00.497) 0:00:00.497 **** 2026-02-04 03:44:11.067758 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-04 03:44:11.067767 | orchestrator | 2026-02-04 03:44:11.067776 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-04 03:44:11.067785 | orchestrator | Wednesday 04 February 2026 03:43:56 +0000 (0:00:00.847) 0:00:01.345 **** 2026-02-04 03:44:11.067794 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-04 03:44:11.067803 | orchestrator | 2026-02-04 03:44:11.067812 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-04 03:44:11.067821 | orchestrator | Wednesday 04 February 2026 03:43:57 +0000 (0:00:00.995) 0:00:02.341 **** 2026-02-04 03:44:11.067830 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:44:11.067840 | orchestrator | 2026-02-04 03:44:11.067849 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-04 03:44:11.067858 | orchestrator | Wednesday 04 February 2026 03:43:57 +0000 (0:00:00.134) 0:00:02.475 **** 2026-02-04 03:44:11.067867 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:44:11.067876 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:44:11.067884 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:44:11.067893 | orchestrator | 2026-02-04 03:44:11.067902 | orchestrator | TASK [Get container info] ****************************************************** 2026-02-04 03:44:11.067911 | orchestrator | Wednesday 04 February 2026 03:43:57 +0000 (0:00:00.304) 0:00:02.780 **** 2026-02-04 03:44:11.067919 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:44:11.067928 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:44:11.067937 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:44:11.067946 | 
orchestrator | 2026-02-04 03:44:11.067954 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-04 03:44:11.067963 | orchestrator | Wednesday 04 February 2026 03:43:58 +0000 (0:00:00.990) 0:00:03.771 **** 2026-02-04 03:44:11.067972 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:44:11.067981 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:44:11.067990 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:44:11.067999 | orchestrator | 2026-02-04 03:44:11.068007 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-04 03:44:11.068016 | orchestrator | Wednesday 04 February 2026 03:43:59 +0000 (0:00:00.310) 0:00:04.081 **** 2026-02-04 03:44:11.068025 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:44:11.068034 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:44:11.068042 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:44:11.068051 | orchestrator | 2026-02-04 03:44:11.068060 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-04 03:44:11.068068 | orchestrator | Wednesday 04 February 2026 03:43:59 +0000 (0:00:00.493) 0:00:04.574 **** 2026-02-04 03:44:11.068077 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:44:11.068085 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:44:11.068094 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:44:11.068102 | orchestrator | 2026-02-04 03:44:11.068111 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-02-04 03:44:11.068120 | orchestrator | Wednesday 04 February 2026 03:43:59 +0000 (0:00:00.312) 0:00:04.887 **** 2026-02-04 03:44:11.068129 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:44:11.068138 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:44:11.068169 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:44:11.068180 | orchestrator | 2026-02-04 
03:44:11.068190 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-02-04 03:44:11.068201 | orchestrator | Wednesday 04 February 2026 03:44:00 +0000 (0:00:00.286) 0:00:05.173 **** 2026-02-04 03:44:11.068211 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:44:11.068222 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:44:11.068232 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:44:11.068243 | orchestrator | 2026-02-04 03:44:11.068254 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-04 03:44:11.068295 | orchestrator | Wednesday 04 February 2026 03:44:00 +0000 (0:00:00.479) 0:00:05.653 **** 2026-02-04 03:44:11.068304 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:44:11.068313 | orchestrator | 2026-02-04 03:44:11.068321 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-04 03:44:11.068331 | orchestrator | Wednesday 04 February 2026 03:44:00 +0000 (0:00:00.256) 0:00:05.909 **** 2026-02-04 03:44:11.068339 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:44:11.068348 | orchestrator | 2026-02-04 03:44:11.068356 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-04 03:44:11.068365 | orchestrator | Wednesday 04 February 2026 03:44:01 +0000 (0:00:00.257) 0:00:06.167 **** 2026-02-04 03:44:11.068374 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:44:11.068382 | orchestrator | 2026-02-04 03:44:11.068391 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-04 03:44:11.068400 | orchestrator | Wednesday 04 February 2026 03:44:01 +0000 (0:00:00.244) 0:00:06.412 **** 2026-02-04 03:44:11.068408 | orchestrator | 2026-02-04 03:44:11.068417 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-04 03:44:11.068425 | orchestrator | 
Wednesday 04 February 2026 03:44:01 +0000 (0:00:00.071) 0:00:06.484 **** 2026-02-04 03:44:11.068434 | orchestrator | 2026-02-04 03:44:11.068442 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-04 03:44:11.068451 | orchestrator | Wednesday 04 February 2026 03:44:01 +0000 (0:00:00.070) 0:00:06.554 **** 2026-02-04 03:44:11.068460 | orchestrator | 2026-02-04 03:44:11.068468 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-04 03:44:11.068477 | orchestrator | Wednesday 04 February 2026 03:44:01 +0000 (0:00:00.074) 0:00:06.629 **** 2026-02-04 03:44:11.068485 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:44:11.068494 | orchestrator | 2026-02-04 03:44:11.068503 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-04 03:44:11.068527 | orchestrator | Wednesday 04 February 2026 03:44:01 +0000 (0:00:00.251) 0:00:06.881 **** 2026-02-04 03:44:11.068537 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:44:11.068545 | orchestrator | 2026-02-04 03:44:11.068570 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-02-04 03:44:11.068579 | orchestrator | Wednesday 04 February 2026 03:44:02 +0000 (0:00:00.246) 0:00:07.127 **** 2026-02-04 03:44:11.068588 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:44:11.068596 | orchestrator | 2026-02-04 03:44:11.068605 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-02-04 03:44:11.068614 | orchestrator | Wednesday 04 February 2026 03:44:02 +0000 (0:00:00.139) 0:00:07.267 **** 2026-02-04 03:44:11.068622 | orchestrator | changed: [testbed-node-0] 2026-02-04 03:44:11.068635 | orchestrator | 2026-02-04 03:44:11.068644 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-02-04 03:44:11.068653 | orchestrator | 
Wednesday 04 February 2026 03:44:03 +0000 (0:00:01.570) 0:00:08.837 ****
ok: [testbed-node-0]

TASK [Fail quorum test if not all monitors are in quorum] **********************
Wednesday 04 February 2026 03:44:04 +0000 (0:00:00.508) 0:00:09.346 ****
skipping: [testbed-node-0]

TASK [Pass quorum test if all monitors are in quorum] **************************
Wednesday 04 February 2026 03:44:04 +0000 (0:00:00.129) 0:00:09.476 ****
ok: [testbed-node-0]

TASK [Set fsid test vars] ******************************************************
Wednesday 04 February 2026 03:44:04 +0000 (0:00:00.332) 0:00:09.808 ****
ok: [testbed-node-0]

TASK [Fail Cluster FSID test if FSID does not match configuration] *************
Wednesday 04 February 2026 03:44:05 +0000 (0:00:00.311) 0:00:10.119 ****
skipping: [testbed-node-0]

TASK [Pass Cluster FSID test if it matches configuration] **********************
Wednesday 04 February 2026 03:44:05 +0000 (0:00:00.124) 0:00:10.244 ****
ok: [testbed-node-0]

TASK [Prepare status test vars] ************************************************
Wednesday 04 February 2026 03:44:05 +0000 (0:00:00.130) 0:00:10.374 ****
ok: [testbed-node-0]

TASK [Gather status data] ******************************************************
Wednesday 04 February 2026 03:44:05 +0000 (0:00:00.129) 0:00:10.503 ****
changed: [testbed-node-0]

TASK [Set health test data] ****************************************************
Wednesday 04 February 2026 03:44:06 +0000 (0:00:01.321) 0:00:11.825 ****
ok: [testbed-node-0]

TASK [Fail cluster-health if health is not acceptable] *************************
Wednesday 04 February 2026 03:44:07 +0000 (0:00:00.324) 0:00:12.149 ****
skipping: [testbed-node-0]

TASK [Pass cluster-health if health is acceptable] *****************************
Wednesday 04 February 2026 03:44:07 +0000 (0:00:00.150) 0:00:12.300 ****
ok: [testbed-node-0]

TASK [Fail cluster-health if health is not acceptable (strict)] ****************
Wednesday 04 February 2026 03:44:07 +0000 (0:00:00.155) 0:00:12.456 ****
skipping: [testbed-node-0]

TASK [Pass cluster-health if status is OK (strict)] ****************************
Wednesday 04 February 2026 03:44:07 +0000 (0:00:00.139) 0:00:12.595 ****
skipping: [testbed-node-0]

TASK [Set validation result to passed if no test failed] ***********************
Wednesday 04 February 2026 03:44:07 +0000 (0:00:00.341) 0:00:12.937 ****
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Set validation result to failed if a test failed] ************************
Wednesday 04 February 2026 03:44:08 +0000 (0:00:00.261) 0:00:13.198 ****
skipping: [testbed-node-0]

TASK [Aggregate test results step one] *****************************************
Wednesday 04 February 2026 03:44:08 +0000 (0:00:00.284) 0:00:13.482 ****
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Aggregate test results step two] *****************************************
Wednesday 04 February 2026 03:44:10 +0000 (0:00:01.799) 0:00:15.282 ****
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
TASK [Aggregate test results step three] ***************************************
Wednesday 04 February 2026 03:44:10 +0000 (0:00:00.293) 0:00:15.575 ****
changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Flush handlers] **********************************************************
Wednesday 04 February 2026 03:44:10 +0000 (0:00:00.259) 0:00:15.834 ****

TASK [Flush handlers] **********************************************************
Wednesday 04 February 2026 03:44:10 +0000 (0:00:00.085) 0:00:15.920 ****

TASK [Flush handlers] **********************************************************
Wednesday 04 February 2026 03:44:10 +0000 (0:00:00.071) 0:00:15.992 ****

RUNNING HANDLER [Write report file] ********************************************
Wednesday 04 February 2026 03:44:11 +0000 (0:00:00.077) 0:00:16.069 ****
changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Print report file information] *******************************************
Wednesday 04 February 2026 03:44:12 +0000 (0:00:01.539) 0:00:17.609 ****
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
    "msg": [
        "Validator run completed.",
        "You can find the report file here:",
        "/opt/reports/validator/ceph-mons-validator-2026-02-04T03:43:56+00:00-report.json",
        "on the following host:",
        "testbed-manager"
    ]
}

PLAY RECAP *********************************************************************
testbed-node-0             : ok=24   changed=5    unreachable=0    failed=0    skipped=13   rescued=0    ignored=0
testbed-node-1             : ok=5    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-2             : ok=5    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0


TASKS RECAP ********************************************************************
Wednesday 04 February 2026 03:44:13 +0000 (0:00:00.895) 0:00:18.505 ****
===============================================================================
Aggregate test results step one ----------------------------------------- 1.80s
Get monmap info from one mon container ---------------------------------- 1.57s
Write report file ------------------------------------------------------- 1.54s
Gather status data ------------------------------------------------------ 1.32s
Create report output directory ------------------------------------------ 1.00s
Get container info ------------------------------------------------------ 0.99s
Print report file information ------------------------------------------- 0.90s
Get timestamp for report file ------------------------------------------- 0.85s
Set quorum test data ---------------------------------------------------- 0.51s
Set test result to passed if container is existing ---------------------- 0.49s
Set test result to passed if ceph-mon is running ------------------------ 0.48s
Pass cluster-health if status is OK (strict) ---------------------------- 0.34s
Pass quorum test if all monitors are in quorum -------------------------- 0.33s
Set health test data ---------------------------------------------------- 0.32s
Prepare test data ------------------------------------------------------- 0.31s
Set fsid test vars ------------------------------------------------------ 0.31s
Set test result to failed if container is missing ----------------------- 0.31s
Prepare test data for container existance test -------------------------- 0.30s
Aggregate test results step two ----------------------------------------- 0.29s
Set test result to failed if ceph-mon is not running -------------------- 0.29s
+ osism validate ceph-mgrs

PLAY [Ceph validate mgrs] ******************************************************

TASK [Get timestamp for report file] *******************************************
Wednesday 04 February 2026 03:44:30 +0000 (0:00:00.463) 0:00:00.463 ****
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Create report output directory] ******************************************
Wednesday 04 February 2026 03:44:31 +0000 (0:00:00.874) 0:00:01.338 ****
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Define report vars] ******************************************************
Wednesday 04 February 2026 03:44:32 +0000 (0:00:01.081) 0:00:02.420 ****
ok: [testbed-node-0]

TASK [Prepare test data for container existance test] **************************
Wednesday 04 February 2026 03:44:33 +0000 (0:00:00.132) 0:00:02.552 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Get container info] ******************************************************
Wednesday 04 February 2026 03:44:33 +0000 (0:00:00.309) 0:00:02.861 ****
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [Set test result to failed if container is missing] ***********************
Wednesday 04 February 2026 03:44:34 +0000 (0:00:01.039) 0:00:03.901 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Set test result to passed if container is existing] **********************
Wednesday 04 February 2026 03:44:34 +0000 (0:00:00.338) 0:00:04.240 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Prepare test data] *******************************************************
Wednesday 04 February 2026 03:44:35 +0000 (0:00:00.515) 0:00:04.755 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Set test result to failed if ceph-mgr is not running] ********************
Wednesday 04 February 2026 03:44:35 +0000 (0:00:00.336) 0:00:05.091 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Set test result to passed if ceph-mgr is running] ************************
Wednesday 04 February 2026 03:44:35 +0000 (0:00:00.299) 0:00:05.391 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Aggregate test results step one] *****************************************
Wednesday 04 February 2026 03:44:36 +0000 (0:00:00.489) 0:00:05.880 ****
skipping: [testbed-node-0]

TASK [Aggregate test results step two] *****************************************
Wednesday 04 February 2026 03:44:36 +0000 (0:00:00.259) 0:00:06.140 ****
skipping: [testbed-node-0]

TASK [Aggregate test results step three] ***************************************
Wednesday 04 February 2026 03:44:36 +0000 (0:00:00.251) 0:00:06.391 ****
skipping: [testbed-node-0]

TASK [Flush handlers] **********************************************************
Wednesday 04 February 2026 03:44:37 +0000 (0:00:00.275) 0:00:06.667 ****

TASK [Flush handlers] **********************************************************
Wednesday 04 February 2026 03:44:37 +0000 (0:00:00.077) 0:00:06.745 ****

TASK [Flush handlers] **********************************************************
Wednesday 04 February 2026 03:44:37 +0000 (0:00:00.075) 0:00:06.820 ****

TASK [Print report file information] *******************************************
Wednesday 04 February 2026 03:44:37 +0000 (0:00:00.073) 0:00:06.894 ****
skipping: [testbed-node-0]

TASK [Fail due to missing containers] ******************************************
Wednesday 04 February 2026 03:44:37 +0000 (0:00:00.256) 0:00:07.151 ****
skipping: [testbed-node-0]

TASK [Define mgr module test vars] *********************************************
Wednesday 04 February 2026 03:44:37 +0000 (0:00:00.252) 0:00:07.404 ****
ok: [testbed-node-0]

TASK [Gather list of mgr modules] **********************************************
Wednesday 04 February 2026 03:44:38 +0000 (0:00:00.125) 0:00:07.529 ****
changed: [testbed-node-0]

TASK [Parse mgr module list from json] *****************************************
Wednesday 04 February 2026 03:44:39 +0000 (0:00:01.865) 0:00:09.395 ****
ok: [testbed-node-0]

TASK [Extract list of enabled mgr modules] *************************************
Wednesday 04 February 2026 03:44:40 +0000 (0:00:00.481) 0:00:09.877 ****
ok: [testbed-node-0]

TASK [Fail test if mgr modules are disabled that should be enabled] ************
Wednesday 04 February 2026 03:44:40 +0000 (0:00:00.332) 0:00:10.209 ****
skipping: [testbed-node-0]

TASK [Pass test if required mgr modules are enabled] ***************************
Wednesday 04 February 2026 03:44:40 +0000 (0:00:00.155) 0:00:10.365 ****
ok: [testbed-node-0]

TASK [Set validation result to passed if no test failed] ***********************
Wednesday 04 February 2026 03:44:41 +0000 (0:00:00.164) 0:00:10.530 ****
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Set validation result to failed if a test failed] ************************
Wednesday 04 February 2026 03:44:41 +0000 (0:00:00.271) 0:00:10.801 ****
skipping: [testbed-node-0]

TASK [Aggregate test results step one] *****************************************
Wednesday 04 February 2026 03:44:41 +0000 (0:00:00.278) 0:00:11.080 ****
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Aggregate test results step two] *****************************************
Wednesday 04 February 2026 03:44:42 +0000 (0:00:01.320) 0:00:12.401 ****
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Aggregate test results step three] ***************************************
Wednesday 04 February 2026 03:44:43 +0000 (0:00:00.263) 0:00:12.665 ****
changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Flush handlers] **********************************************************
Wednesday 04 February 2026 03:44:43 +0000 (0:00:00.276) 0:00:12.941 ****

TASK [Flush handlers] **********************************************************
Wednesday 04 February 2026 03:44:43 +0000 (0:00:00.070) 0:00:13.011 ****

TASK [Flush handlers] **********************************************************
Wednesday 04 February 2026 03:44:43 +0000 (0:00:00.069) 0:00:13.081 ****

RUNNING HANDLER [Write report file] ********************************************
Wednesday 04 February 2026 03:44:43 +0000 (0:00:00.265) 0:00:13.346 ****
changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]

TASK [Print report file information] *******************************************
Wednesday 04 February 2026 03:44:45 +0000 (0:00:01.389) 0:00:14.735 ****
ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
    "msg": [
        "Validator run completed.",
        "You can find the report file here:",
        "/opt/reports/validator/ceph-mgrs-validator-2026-02-04T03:44:31+00:00-report.json",
        "on the following host:",
        "testbed-manager"
    ]
}

PLAY RECAP *********************************************************************
testbed-node-0             : ok=19   changed=3    unreachable=0    failed=0    skipped=9    rescued=0    ignored=0
testbed-node-1             : ok=5    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-2             : ok=5    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0


TASKS RECAP ********************************************************************
Wednesday 04 February 2026 03:44:45 +0000 (0:00:00.408) 0:00:15.144 ****
===============================================================================
Gather list of mgr modules ---------------------------------------------- 1.87s
Write report file ------------------------------------------------------- 1.39s
Aggregate test results step one ----------------------------------------- 1.32s
Create report output directory ------------------------------------------ 1.08s
Get container info ------------------------------------------------------ 1.04s
Get timestamp for report file ------------------------------------------- 0.87s
Set test result to passed if container is existing ---------------------- 0.52s
Set test result to passed if ceph-mgr is running ------------------------ 0.49s
Parse mgr module list from json ----------------------------------------- 0.48s
Print report file information ------------------------------------------- 0.41s
Flush handlers ---------------------------------------------------------- 0.41s
Set test result to failed if container is missing ----------------------- 0.34s
Prepare test data ------------------------------------------------------- 0.34s
Extract list of enabled mgr modules ------------------------------------- 0.33s
Prepare test data for container existance test -------------------------- 0.31s
Set test result to failed if ceph-mgr is not running -------------------- 0.30s
Set validation result to failed if a test failed ------------------------ 0.28s
Aggregate test results step three --------------------------------------- 0.28s
Aggregate test results step three --------------------------------------- 0.28s
Set validation result to passed if no test failed ----------------------- 0.27s
+ osism validate ceph-osds

PLAY [Ceph validate OSDs] ******************************************************

TASK [Get timestamp for report file] *******************************************
Wednesday 04 February 2026 03:45:03 +0000 (0:00:00.431) 0:00:00.431 ****
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Get extra vars for Ceph configuration] ***********************************
Wednesday 04 February 2026 03:45:03 +0000 (0:00:00.862) 0:00:01.294 ****
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Create report output directory] ******************************************
Wednesday 04 February 2026 03:45:04 +0000 (0:00:00.721) 0:00:01.831 ****
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Define report vars] ******************************************************
Wednesday 04 February 2026 03:45:05 +0000 (0:00:00.147) 0:00:02.552 ****
ok: [testbed-node-3]

TASK [Define OSD test variables] ***********************************************
Wednesday 04 February 2026 03:45:05 +0000 (0:00:00.147) 0:00:02.699 ****
skipping: [testbed-node-3]

TASK [Calculate OSD devices for each host] *************************************
Wednesday 04 February 2026 03:45:05 +0000 (0:00:00.147) 0:00:02.847 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [Define OSD test variables] ***********************************************
Wednesday 04 February 2026 03:45:05 +0000 (0:00:00.334) 0:00:03.181 ****
ok: [testbed-node-3]

TASK [Calculate OSD devices for each host] *************************************
Wednesday 04 February 2026 03:45:06 +0000 (0:00:00.167) 0:00:03.349 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Calculate total number of OSDs in cluster] *******************************
Wednesday 04 February 2026 03:45:06 +0000 (0:00:00.360) 0:00:03.709 ****
ok: [testbed-node-3]

TASK [Prepare test data] *******************************************************
Wednesday 04 February 2026 03:45:07 +0000 (0:00:00.812) 0:00:04.522 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [Get list of ceph-osd containers on host] *********************************
Wednesday 04 February 2026 03:45:07 +0000 (0:00:00.326) 0:00:04.849 ****
skipping: [testbed-node-3] => (item={'id': '5224cb6994e1691a4c0f14716e4e9b0eb5aed0e60363b43547cfd7064549667c', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
skipping: [testbed-node-3] => (item={'id': '45b5ec0cbc82583e76af3204a7c81848f0338da8acc8a963f3635390f1173fec', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
skipping: [testbed-node-3] => (item={'id': '38c8388a08154ff6b420dea0235d820f28ca6ce04da9b011435b18ab66c87897', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
skipping: [testbed-node-3] => (item={'id': '12f18eadde7e3f373e93bb34fa50609929a4b919312e6f99efc6bd4dbbb4cac7', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
skipping: [testbed-node-3] => (item={'id': '6fc2719b13589fb202de2ac0579ca4f0b0de32e622b45e5b7671774d00682bbc', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
skipping: [testbed-node-3] => (item={'id': 'f295865d861a0bf73efcc9849cbb7890cd53e0a939af428e6db8f64c618bad79', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
skipping: [testbed-node-3] => (item={'id': 'f463815ea8f4bcc2375d5e3f8de5fa7c83264a58309c983045fb708d69eb2111', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
skipping: [testbed-node-3] => (item={'id': '0f64b5be9f6571f3cfd373802b6b396db8d47e85a335671fde670bd89c010498', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
skipping: [testbed-node-3] => (item={'id': 'cb371fb7f80d745c49d23de3ea2adad432c02e47731b3859aaaddaea89cc973b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
skipping: [testbed-node-3] => (item={'id': '51378afcd1c2c73af37daa258480a0dba72ad6fa6e52196a70685ef0722035ed', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
skipping: [testbed-node-3] => (item={'id': '10bc2fcb8b0b7b170187cfd08ccf051313c94d9a3b78e34be08f894cbc119336', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
ok: [testbed-node-3] => (item={'id': '9de49b099a9184cba7efd780c50fefe0f14f7e0f41449a93d90af853aebfb07b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
ok: [testbed-node-3] => (item={'id': '71ba0535ff9fdb1f978557f27fa6e415ea60e8c37ee421cafc348fae2257c4c7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
skipping: [testbed-node-4] => (item={'id': 'c5447e8d13f275fdb43c434108ad3e0741fdd470f7833c46e37c95677da3cfe9', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
skipping: [testbed-node-3] => (item={'id': '4f7d46ccaf7bdc3e6285b85c22016b97cb6c5f0389a0b9aa248f2cb778c42a31', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
skipping: [testbed-node-4] => (item={'id': '67a741c56488a73978545d450e4bd660e14f6acdac3f6826ed470b112c1487ac', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
skipping: [testbed-node-3] => (item={'id': '23fbd75b13357324c70b1705740dc7809be04e9e68c1ab13d9ce3853994f51ec', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
skipping: [testbed-node-4] => (item={'id': '90acf089c785415f69de213de05d2cdc0acf9734dce9ba2c039f940f16a1032c', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})
skipping: [testbed-node-3] => (item={'id': 'c138a4c2b00574ac37d3c0052ff738a2fe53946c9715ff2b2d4614bf37de69e0', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
skipping: [testbed-node-3] => (item={'id':
'a52858dd963657f90eb280122656c816a61158b467b9212bb95feea777d22638', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-04 03:45:07.889788 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a4f925c5d36efcad285a64bafd469636f0c00b9a1fb93282ef786d20aaa4418d', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-04 03:45:08.034104 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'bcc4e30bf1b3429a873c5d251632eb3c3dc14aabbbc000a2266476b3ac770cac', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-04 03:45:08.034264 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7d882554abf218df2478835ed8484abd55690824ed803107def2625364b4dfa6', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-04 03:45:08.034352 | orchestrator | skipping: [testbed-node-3] => (item={'id': '625ba2e0ee6ea188919903d772ca240b5b3d2d7532db7310990ef08d37556dd6', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-04 03:45:08.034366 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b2107df8263f086f59a0493ab2bcc83dfe132320e5ee50e8a5c4babf5973ad5c', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-04 03:45:08.034382 | orchestrator | skipping: [testbed-node-4] => (item={'id': '24d307f203e119d4412f9ab39c230705ebd8e7f50ec02bd775026b6570ea0d67', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 
'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-04 03:45:08.034393 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3c844b0c026be1f0e27b0c21cf3ea5f5816c8b95b12c2797f91d18baa0e86b0f', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})  2026-02-04 03:45:08.034406 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7320ea8c0eb6bbf895aeeaa80272e669076b45580eab3a6fef938195277328dc', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-04 03:45:08.034426 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9f2534291772a8cd867f04f6b291873b90107b3c7eb09ed4442542789760c1c2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-04 03:45:08.034443 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7453078da0bebbe29fc7aebc0a21aa1b955b019d2d721213b8ab58cdbb45358d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-04 03:45:08.034462 | orchestrator | ok: [testbed-node-4] => (item={'id': 'd650dfd7de4ad45843b357afe7a3d9dd22aed196b9fee6241cc9246387461459', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-04 03:45:08.034480 | orchestrator | ok: [testbed-node-4] => (item={'id': '040bf67f1af588161673b7f3e1417faba6ab60d9e08c208ad1a6dc08c161b6a9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-04 03:45:08.034498 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'5838b8db3656ae097b1d33f6fa728c3e0e565530c64caed2ddc0c9cb952ad3b6', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-04 03:45:08.034516 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5f9dc08eb011341af763a7d9274bd0a2eb814543fc92e30fcb8c454c772e747a', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-04 03:45:08.034535 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0d31050d6a6b24f1595c063575f70b80638fed894ecf6e854f49c7115b2c277c', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-04 03:45:08.034573 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b0476a82554ca83dc302e162fc07a7d4735ac5b61ef7d24e61fea2285ee5ba9e', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-04 03:45:08.034596 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4ef696f868cc430dac1762dc2b624c3809310cd7aa261beb39b58212d01fa07c', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-04 03:45:08.034609 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd7f6c44852e6a36b2ae3e6d6de8ef98cfe4e39af78eba618ada9df589e4e2af6', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-04 03:45:08.034621 | orchestrator | skipping: [testbed-node-5] => (item={'id': '44d5c91559a6b5c6d178e269092393ff890ed2d1ed0fb973b8835a6969a098a6', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-04 03:45:08.034634 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2432a6a4b0825229a80e14f7dab8721702d629777e4588875953781498ac0ec1', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-04 03:45:08.034652 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ca4cf8d06cb1f91bf8bc00d135e7490f7609cabac581f1b130e948ffd0238df4', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2026-02-04 03:45:08.034665 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dbf1da49387795eca0fb23355991074b4341c536637dac8eb59b1f3f49b01e69', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-04 03:45:08.034677 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'da38a317a2ccd375cc99e1e386a5ff7d82483a5b5eb25ccd5d36ac4d37b4620d', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-04 03:45:08.034688 | orchestrator | skipping: [testbed-node-5] => (item={'id': '47fd746f93c04cd39f8fc962c2a74fc276d38cf2103aeead170e97f6d1127a8d', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-04 03:45:08.034701 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bf48db1b9313450a060f9bab09d5836e305b2fbbc8884e0168371701cfe4e1ee', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-04 03:45:08.034713 | orchestrator | skipping: [testbed-node-5] => (item={'id': '232f21aafb6d4f57419e0342d391103ced9b92058d73b2140a5c7d001a942e15', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})  2026-02-04 03:45:08.034726 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7c3fcb798479b9c79aafc65889ea4b090e43439238b529587d22a7eb3329844f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-04 03:45:08.034738 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a709b7d6495c0660169450592b61f960df7bf4c4064662872a1a7c857d6e0821', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-04 03:45:08.034750 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b72ab666b493a4f4d5b2800b463ba10eea972c4a7ecb9068985c5b2808245142', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-04 03:45:08.034768 | orchestrator | ok: [testbed-node-5] => (item={'id': '4a03b0eb1cad65a6ce5ff6c950813e8890ccb48e4132832b01ac018cd8459b40', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-04 03:45:08.034789 | orchestrator | ok: [testbed-node-5] => (item={'id': '19da6e1b2563d043ab99ff42a69182057e5e83018c99dad1cb981d31491ebbe5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-04 03:45:19.365967 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'23529f58f66cb78f897bdd318b442d1a24be915c4381cf863c7570fe54e8dfa5', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-04 03:45:19.366178 | orchestrator | skipping: [testbed-node-5] => (item={'id': '87e1532b5759faa16ee980d09c597e186bd5b8536dc35035f177127eac45df42', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-04 03:45:19.366202 | orchestrator | skipping: [testbed-node-5] => (item={'id': '34a95df3ec6128a707dec11b7d8a4dd2c5647d2b0ef33b8c07d6102ae5ab859b', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-04 03:45:19.366216 | orchestrator | skipping: [testbed-node-5] => (item={'id': '457d46e10f759aaf30aab4ac671c4d79dd0ff5aaa50db50cffcb94d349fc6804', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-04 03:45:19.366246 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c029af0d6f28f7e52bbcdbe0dd8853c23d319891e7f02f106e573db1b9268d39', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-04 03:45:19.366259 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a6528181ac72df526c603def8aceb7a3885b8cbf70a7474d548d0c2e4f0c7d79', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-04 03:45:19.366270 | orchestrator | 2026-02-04 03:45:19.366283 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-02-04 03:45:19.366352 | orchestrator | Wednesday 04 February 2026 
03:45:08 +0000 (0:00:00.490) 0:00:05.339 **** 2026-02-04 03:45:19.366367 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:45:19.366379 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:45:19.366390 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:45:19.366401 | orchestrator | 2026-02-04 03:45:19.366412 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-02-04 03:45:19.366423 | orchestrator | Wednesday 04 February 2026 03:45:08 +0000 (0:00:00.320) 0:00:05.660 **** 2026-02-04 03:45:19.366434 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:45:19.366446 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:45:19.366457 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:45:19.366468 | orchestrator | 2026-02-04 03:45:19.366479 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-02-04 03:45:19.366490 | orchestrator | Wednesday 04 February 2026 03:45:08 +0000 (0:00:00.504) 0:00:06.165 **** 2026-02-04 03:45:19.366504 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:45:19.366516 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:45:19.366529 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:45:19.366542 | orchestrator | 2026-02-04 03:45:19.366554 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-04 03:45:19.366567 | orchestrator | Wednesday 04 February 2026 03:45:09 +0000 (0:00:00.312) 0:00:06.478 **** 2026-02-04 03:45:19.366579 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:45:19.366592 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:45:19.366606 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:45:19.366640 | orchestrator | 2026-02-04 03:45:19.366732 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-02-04 03:45:19.366746 | orchestrator | Wednesday 04 February 2026 03:45:09 +0000 (0:00:00.294) 0:00:06.772 **** 2026-02-04 
03:45:19.366757 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-02-04 03:45:19.366770 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-02-04 03:45:19.366780 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:45:19.366792 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-02-04 03:45:19.366803 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-02-04 03:45:19.366813 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:45:19.366824 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-02-04 03:45:19.366835 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-02-04 03:45:19.366845 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:45:19.366856 | orchestrator | 2026-02-04 03:45:19.366868 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-02-04 03:45:19.366879 | orchestrator | Wednesday 04 February 2026 03:45:09 +0000 (0:00:00.334) 0:00:07.107 **** 2026-02-04 03:45:19.366890 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:45:19.366900 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:45:19.366911 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:45:19.366922 | orchestrator | 2026-02-04 03:45:19.366932 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-04 03:45:19.366943 | orchestrator | Wednesday 04 February 2026 03:45:10 +0000 (0:00:00.514) 0:00:07.622 **** 2026-02-04 03:45:19.366954 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:45:19.366988 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:45:19.367000 | orchestrator | 
skipping: [testbed-node-5] 2026-02-04 03:45:19.367011 | orchestrator | 2026-02-04 03:45:19.367022 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-04 03:45:19.367033 | orchestrator | Wednesday 04 February 2026 03:45:10 +0000 (0:00:00.309) 0:00:07.932 **** 2026-02-04 03:45:19.367044 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:45:19.367055 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:45:19.367066 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:45:19.367077 | orchestrator | 2026-02-04 03:45:19.367088 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-02-04 03:45:19.367098 | orchestrator | Wednesday 04 February 2026 03:45:10 +0000 (0:00:00.304) 0:00:08.236 **** 2026-02-04 03:45:19.367109 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:45:19.367120 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:45:19.367131 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:45:19.367141 | orchestrator | 2026-02-04 03:45:19.367163 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-04 03:45:19.367174 | orchestrator | Wednesday 04 February 2026 03:45:11 +0000 (0:00:00.325) 0:00:08.562 **** 2026-02-04 03:45:19.367185 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:45:19.367196 | orchestrator | 2026-02-04 03:45:19.367207 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-04 03:45:19.367218 | orchestrator | Wednesday 04 February 2026 03:45:11 +0000 (0:00:00.727) 0:00:09.289 **** 2026-02-04 03:45:19.367228 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:45:19.367239 | orchestrator | 2026-02-04 03:45:19.367250 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-04 03:45:19.367261 | orchestrator | Wednesday 04 February 2026 03:45:12 +0000 (0:00:00.252) 
0:00:09.541 **** 2026-02-04 03:45:19.367272 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:45:19.367282 | orchestrator | 2026-02-04 03:45:19.367293 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-04 03:45:19.367343 | orchestrator | Wednesday 04 February 2026 03:45:12 +0000 (0:00:00.288) 0:00:09.830 **** 2026-02-04 03:45:19.367354 | orchestrator | 2026-02-04 03:45:19.367366 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-04 03:45:19.367376 | orchestrator | Wednesday 04 February 2026 03:45:12 +0000 (0:00:00.072) 0:00:09.903 **** 2026-02-04 03:45:19.367387 | orchestrator | 2026-02-04 03:45:19.367398 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-04 03:45:19.367409 | orchestrator | Wednesday 04 February 2026 03:45:12 +0000 (0:00:00.068) 0:00:09.972 **** 2026-02-04 03:45:19.367420 | orchestrator | 2026-02-04 03:45:19.367430 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-04 03:45:19.367441 | orchestrator | Wednesday 04 February 2026 03:45:12 +0000 (0:00:00.073) 0:00:10.045 **** 2026-02-04 03:45:19.367452 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:45:19.367462 | orchestrator | 2026-02-04 03:45:19.367473 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-02-04 03:45:19.367484 | orchestrator | Wednesday 04 February 2026 03:45:12 +0000 (0:00:00.262) 0:00:10.308 **** 2026-02-04 03:45:19.367495 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:45:19.367505 | orchestrator | 2026-02-04 03:45:19.367516 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-04 03:45:19.367527 | orchestrator | Wednesday 04 February 2026 03:45:13 +0000 (0:00:00.273) 0:00:10.582 **** 2026-02-04 03:45:19.367538 | orchestrator | ok: 
[testbed-node-3] 2026-02-04 03:45:19.367548 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:45:19.367559 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:45:19.367570 | orchestrator | 2026-02-04 03:45:19.367581 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-02-04 03:45:19.367592 | orchestrator | Wednesday 04 February 2026 03:45:13 +0000 (0:00:00.305) 0:00:10.888 **** 2026-02-04 03:45:19.367602 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:45:19.367613 | orchestrator | 2026-02-04 03:45:19.367624 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-02-04 03:45:19.367635 | orchestrator | Wednesday 04 February 2026 03:45:14 +0000 (0:00:00.670) 0:00:11.559 **** 2026-02-04 03:45:19.367646 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-04 03:45:19.367656 | orchestrator | 2026-02-04 03:45:19.367667 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-02-04 03:45:19.367678 | orchestrator | Wednesday 04 February 2026 03:45:15 +0000 (0:00:01.580) 0:00:13.139 **** 2026-02-04 03:45:19.367689 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:45:19.367699 | orchestrator | 2026-02-04 03:45:19.367710 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-02-04 03:45:19.367721 | orchestrator | Wednesday 04 February 2026 03:45:15 +0000 (0:00:00.147) 0:00:13.287 **** 2026-02-04 03:45:19.367731 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:45:19.367742 | orchestrator | 2026-02-04 03:45:19.367753 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-02-04 03:45:19.367764 | orchestrator | Wednesday 04 February 2026 03:45:16 +0000 (0:00:00.325) 0:00:13.612 **** 2026-02-04 03:45:19.367775 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:45:19.367785 | orchestrator | 
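The tasks above fetch `ceph osd tree` as JSON, parse it, and collect OSDs that are not up or in. A minimal sketch of that check, assuming the node list shape of `ceph osd tree -f json` (the sample data and function name below are illustrative, not the playbook's actual code):

```python
import json

# Hypothetical sample shaped like `ceph osd tree -f json` output;
# only the fields the check needs are included.
OSD_TREE_JSON = json.dumps({
    "nodes": [
        {"id": -1, "name": "default", "type": "root", "children": [0, 1]},
        {"id": 0, "name": "osd.0", "type": "osd", "status": "up", "reweight": 1.0},
        {"id": 1, "name": "osd.1", "type": "osd", "status": "down", "reweight": 0.0},
    ]
})

def osds_not_up_or_in(tree_json: str) -> list[str]:
    """Return names of OSD nodes that are not 'up' or are weighted out (not 'in')."""
    bad = []
    for node in json.loads(tree_json)["nodes"]:
        if node.get("type") != "osd":
            continue  # skip root/host buckets in the CRUSH tree
        if node.get("status") != "up" or node.get("reweight", 0) == 0:
            bad.append(node["name"])
    return bad

print(osds_not_up_or_in(OSD_TREE_JSON))  # → ['osd.1']
```

With an empty result the test passes, which matches the "Pass test if OSDs are all up and in" outcome logged above.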
2026-02-04 03:45:19.367796 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-02-04 03:45:19.367807 | orchestrator | Wednesday 04 February 2026 03:45:16 +0000 (0:00:00.116) 0:00:13.728 **** 2026-02-04 03:45:19.367818 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:45:19.367829 | orchestrator | 2026-02-04 03:45:19.367840 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-04 03:45:19.367851 | orchestrator | Wednesday 04 February 2026 03:45:16 +0000 (0:00:00.141) 0:00:13.870 **** 2026-02-04 03:45:19.367861 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:45:19.367872 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:45:19.367883 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:45:19.367900 | orchestrator | 2026-02-04 03:45:19.367911 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-02-04 03:45:19.367922 | orchestrator | Wednesday 04 February 2026 03:45:16 +0000 (0:00:00.315) 0:00:14.186 **** 2026-02-04 03:45:19.367933 | orchestrator | changed: [testbed-node-3] 2026-02-04 03:45:19.367944 | orchestrator | changed: [testbed-node-5] 2026-02-04 03:45:19.367955 | orchestrator | changed: [testbed-node-4] 2026-02-04 03:45:29.712710 | orchestrator | 2026-02-04 03:45:29.712810 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-02-04 03:45:29.712820 | orchestrator | Wednesday 04 February 2026 03:45:19 +0000 (0:00:02.475) 0:00:16.662 **** 2026-02-04 03:45:29.712827 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:45:29.712833 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:45:29.712838 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:45:29.712844 | orchestrator | 2026-02-04 03:45:29.712850 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-02-04 03:45:29.712855 | orchestrator | Wednesday 04 February 
2026 03:45:19 +0000 (0:00:00.330) 0:00:16.993 **** 2026-02-04 03:45:29.712860 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:45:29.712866 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:45:29.712871 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:45:29.712876 | orchestrator | 2026-02-04 03:45:29.712881 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-02-04 03:45:29.712886 | orchestrator | Wednesday 04 February 2026 03:45:20 +0000 (0:00:00.540) 0:00:17.533 **** 2026-02-04 03:45:29.712892 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:45:29.712898 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:45:29.712903 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:45:29.712908 | orchestrator | 2026-02-04 03:45:29.712913 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-02-04 03:45:29.712919 | orchestrator | Wednesday 04 February 2026 03:45:20 +0000 (0:00:00.315) 0:00:17.849 **** 2026-02-04 03:45:29.712924 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:45:29.712929 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:45:29.712934 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:45:29.712939 | orchestrator | 2026-02-04 03:45:29.712944 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-02-04 03:45:29.712952 | orchestrator | Wednesday 04 February 2026 03:45:21 +0000 (0:00:00.557) 0:00:18.406 **** 2026-02-04 03:45:29.712958 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:45:29.712963 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:45:29.712968 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:45:29.712974 | orchestrator | 2026-02-04 03:45:29.712979 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-02-04 03:45:29.712984 | orchestrator | Wednesday 04 February 2026 03:45:21 +0000 (0:00:00.302) 
0:00:18.708 ****
2026-02-04 03:45:29.712990 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:45:29.712995 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:45:29.713000 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:45:29.713005 | orchestrator |
2026-02-04 03:45:29.713010 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-04 03:45:29.713015 | orchestrator | Wednesday 04 February 2026 03:45:21 +0000 (0:00:00.312) 0:00:19.021 ****
2026-02-04 03:45:29.713021 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:45:29.713026 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:45:29.713031 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:45:29.713036 | orchestrator |
2026-02-04 03:45:29.713041 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2026-02-04 03:45:29.713046 | orchestrator | Wednesday 04 February 2026 03:45:22 +0000 (0:00:00.512) 0:00:19.534 ****
2026-02-04 03:45:29.713052 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:45:29.713057 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:45:29.713062 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:45:29.713067 | orchestrator |
2026-02-04 03:45:29.713072 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2026-02-04 03:45:29.713092 | orchestrator | Wednesday 04 February 2026 03:45:22 +0000 (0:00:00.769) 0:00:20.303 ****
2026-02-04 03:45:29.713098 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:45:29.713103 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:45:29.713108 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:45:29.713113 | orchestrator |
2026-02-04 03:45:29.713118 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2026-02-04 03:45:29.713124 | orchestrator | Wednesday 04 February 2026 03:45:23 +0000 (0:00:00.317) 0:00:20.621 ****
2026-02-04 03:45:29.713129 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:45:29.713134 | orchestrator | skipping: [testbed-node-4]
2026-02-04 03:45:29.713139 | orchestrator | skipping: [testbed-node-5]
2026-02-04 03:45:29.713154 | orchestrator |
2026-02-04 03:45:29.713159 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2026-02-04 03:45:29.713165 | orchestrator | Wednesday 04 February 2026 03:45:23 +0000 (0:00:00.314) 0:00:20.935 ****
2026-02-04 03:45:29.713170 | orchestrator | ok: [testbed-node-3]
2026-02-04 03:45:29.713175 | orchestrator | ok: [testbed-node-4]
2026-02-04 03:45:29.713180 | orchestrator | ok: [testbed-node-5]
2026-02-04 03:45:29.713185 | orchestrator |
2026-02-04 03:45:29.713196 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-04 03:45:29.713201 | orchestrator | Wednesday 04 February 2026 03:45:24 +0000 (0:00:00.528) 0:00:21.464 ****
2026-02-04 03:45:29.713207 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-04 03:45:29.713212 | orchestrator |
2026-02-04 03:45:29.713217 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-04 03:45:29.713222 | orchestrator | Wednesday 04 February 2026 03:45:24 +0000 (0:00:00.267) 0:00:21.731 ****
2026-02-04 03:45:29.713227 | orchestrator | skipping: [testbed-node-3]
2026-02-04 03:45:29.713233 | orchestrator |
2026-02-04 03:45:29.713238 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-04 03:45:29.713243 | orchestrator | Wednesday 04 February 2026 03:45:24 +0000 (0:00:00.250) 0:00:21.982 ****
2026-02-04 03:45:29.713248 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-04 03:45:29.713253 | orchestrator |
2026-02-04 03:45:29.713258 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-04 03:45:29.713263 | orchestrator | Wednesday 04 February 2026 03:45:26 +0000 (0:00:01.760) 0:00:23.743 ****
2026-02-04 03:45:29.713268 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-04 03:45:29.713273 | orchestrator |
2026-02-04 03:45:29.713279 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-04 03:45:29.713285 | orchestrator | Wednesday 04 February 2026 03:45:26 +0000 (0:00:00.258) 0:00:24.001 ****
2026-02-04 03:45:29.713291 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-04 03:45:29.713297 | orchestrator |
2026-02-04 03:45:29.713362 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-04 03:45:29.713369 | orchestrator | Wednesday 04 February 2026 03:45:26 +0000 (0:00:00.278) 0:00:24.280 ****
2026-02-04 03:45:29.713375 | orchestrator |
2026-02-04 03:45:29.713381 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-04 03:45:29.713387 | orchestrator | Wednesday 04 February 2026 03:45:27 +0000 (0:00:00.071) 0:00:24.351 ****
2026-02-04 03:45:29.713393 | orchestrator |
2026-02-04 03:45:29.713399 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-04 03:45:29.713405 | orchestrator | Wednesday 04 February 2026 03:45:27 +0000 (0:00:00.074) 0:00:24.423 ****
2026-02-04 03:45:29.713411 | orchestrator |
2026-02-04 03:45:29.713417 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-04 03:45:29.713423 | orchestrator | Wednesday 04 February 2026 03:45:27 +0000 (0:00:00.074) 0:00:24.497 ****
2026-02-04 03:45:29.713429 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-04 03:45:29.713435 | orchestrator |
2026-02-04 03:45:29.713441 | orchestrator | TASK [Print report file information] *******************************************
2026-02-04 03:45:29.713452 | orchestrator | Wednesday 04 February 2026 03:45:28 +0000 (0:00:01.533) 0:00:26.030 ****
2026-02-04 03:45:29.713458 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2026-02-04 03:45:29.713464 | orchestrator |  "msg": [
2026-02-04 03:45:29.713471 | orchestrator |  "Validator run completed.",
2026-02-04 03:45:29.713477 | orchestrator |  "You can find the report file here:",
2026-02-04 03:45:29.713484 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-02-04T03:45:03+00:00-report.json",
2026-02-04 03:45:29.713493 | orchestrator |  "on the following host:",
2026-02-04 03:45:29.713499 | orchestrator |  "testbed-manager"
2026-02-04 03:45:29.713505 | orchestrator |  ]
2026-02-04 03:45:29.713511 | orchestrator | }
2026-02-04 03:45:29.713518 | orchestrator |
2026-02-04 03:45:29.713523 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 03:45:29.713531 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-04 03:45:29.713538 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-04 03:45:29.713545 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-04 03:45:29.713551 | orchestrator |
2026-02-04 03:45:29.713557 | orchestrator |
2026-02-04 03:45:29.713563 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 03:45:29.713568 | orchestrator | Wednesday 04 February 2026 03:45:29 +0000 (0:00:00.639) 0:00:26.669 ****
2026-02-04 03:45:29.713573 | orchestrator | ===============================================================================
2026-02-04 03:45:29.713578 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.48s
2026-02-04 03:45:29.713583 | orchestrator | Aggregate test results step one ----------------------------------------- 1.76s
2026-02-04 03:45:29.713588 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.58s
2026-02-04 03:45:29.713593 | orchestrator | Write report file ------------------------------------------------------- 1.53s
2026-02-04 03:45:29.713598 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s
2026-02-04 03:45:29.713603 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.81s
2026-02-04 03:45:29.713608 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.77s
2026-02-04 03:45:29.713613 | orchestrator | Aggregate test results step one ----------------------------------------- 0.73s
2026-02-04 03:45:29.713619 | orchestrator | Create report output directory ------------------------------------------ 0.72s
2026-02-04 03:45:29.713624 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.67s
2026-02-04 03:45:29.713629 | orchestrator | Print report file information ------------------------------------------- 0.64s
2026-02-04 03:45:29.713634 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.56s
2026-02-04 03:45:29.713639 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.54s
2026-02-04 03:45:29.713644 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.54s
2026-02-04 03:45:29.713649 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.53s
2026-02-04 03:45:29.713654 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.51s
2026-02-04 03:45:29.713659 | orchestrator | Prepare test data ------------------------------------------------------- 0.51s
2026-02-04 03:45:29.713664 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.50s
2026-02-04 03:45:29.713669 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.49s
2026-02-04 03:45:29.713674 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.36s
2026-02-04 03:45:30.040963 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2026-02-04 03:45:30.051636 | orchestrator | + set -e
2026-02-04 03:45:30.051736 | orchestrator | + source /opt/manager-vars.sh
2026-02-04 03:45:30.051752 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-04 03:45:30.051763 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-04 03:45:30.051774 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-04 03:45:30.051785 | orchestrator | ++ CEPH_VERSION=reef
2026-02-04 03:45:30.051796 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-04 03:45:30.051808 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-04 03:45:30.051819 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-04 03:45:30.051831 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-04 03:45:30.051841 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-04 03:45:30.051852 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-04 03:45:30.051863 | orchestrator | ++ export ARA=false
2026-02-04 03:45:30.051874 | orchestrator | ++ ARA=false
2026-02-04 03:45:30.051885 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-04 03:45:30.051896 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-04 03:45:30.051906 | orchestrator | ++ export TEMPEST=false
2026-02-04 03:45:30.051917 | orchestrator | ++ TEMPEST=false
2026-02-04 03:45:30.051928 | orchestrator | ++ export IS_ZUUL=true
2026-02-04 03:45:30.051938 | orchestrator | ++ IS_ZUUL=true
2026-02-04 03:45:30.051957 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-04 03:45:30.051977 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-04 03:45:30.051997 | orchestrator | ++ export EXTERNAL_API=false
2026-02-04 03:45:30.052015 | orchestrator | ++ EXTERNAL_API=false
2026-02-04 03:45:30.052034 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-04 03:45:30.052053 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-04 03:45:30.052073 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-04 03:45:30.052093 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-04 03:45:30.052111 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-04 03:45:30.052131 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-04 03:45:30.052143 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-04 03:45:30.052153 | orchestrator | + source /etc/os-release
2026-02-04 03:45:30.052164 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS'
2026-02-04 03:45:30.052176 | orchestrator | ++ NAME=Ubuntu
2026-02-04 03:45:30.052186 | orchestrator | ++ VERSION_ID=24.04
2026-02-04 03:45:30.052197 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)'
2026-02-04 03:45:30.052220 | orchestrator | ++ VERSION_CODENAME=noble
2026-02-04 03:45:30.052231 | orchestrator | ++ ID=ubuntu
2026-02-04 03:45:30.052242 | orchestrator | ++ ID_LIKE=debian
2026-02-04 03:45:30.052253 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2026-02-04 03:45:30.052264 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2026-02-04 03:45:30.052278 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2026-02-04 03:45:30.052291 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2026-02-04 03:45:30.052334 | orchestrator | ++ UBUNTU_CODENAME=noble
2026-02-04 03:45:30.052347 | orchestrator | ++ LOGO=ubuntu-logo
2026-02-04 03:45:30.052360 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2026-02-04 03:45:30.052374 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2026-02-04 03:45:30.052388 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-02-04 03:45:30.069773 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2026-02-04 03:45:53.810952 | orchestrator |
2026-02-04 03:45:53.811068 | orchestrator | # Status of Elasticsearch
2026-02-04 03:45:53.811085 | orchestrator |
2026-02-04 03:45:53.811098 | orchestrator | + pushd /opt/configuration/contrib
2026-02-04 03:45:53.811111 | orchestrator | + echo
2026-02-04 03:45:53.811131 | orchestrator | + echo '# Status of Elasticsearch'
2026-02-04 03:45:53.811150 | orchestrator | + echo
2026-02-04 03:45:53.811170 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2026-02-04 03:45:54.000806 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2026-02-04 03:45:54.001122 | orchestrator |
2026-02-04 03:45:54.001148 | orchestrator | + echo
2026-02-04 03:45:54.001161 | orchestrator | + echo '# Status of MariaDB'
2026-02-04 03:45:54.001173 | orchestrator | # Status of MariaDB
2026-02-04 03:45:54.001212 | orchestrator |
2026-02-04 03:45:54.001224 | orchestrator | + echo
2026-02-04 03:45:54.002286 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-04 03:45:54.055140 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-04 03:45:54.055251 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-04 03:45:54.055275 | orchestrator | + MARIADB_USER=root_shard_0
2026-02-04 03:45:54.055296 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2026-02-04 03:45:54.125980 | orchestrator | Reading package lists...
2026-02-04 03:45:54.474929 | orchestrator | Building dependency tree...
2026-02-04 03:45:54.476682 | orchestrator | Reading state information...
2026-02-04 03:45:54.867196 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2026-02-04 03:45:54.867367 | orchestrator | bc set to manually installed.
2026-02-04 03:45:54.867387 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
2026-02-04 03:45:55.561131 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2026-02-04 03:45:55.561922 | orchestrator |
2026-02-04 03:45:55.561964 | orchestrator | # Status of Prometheus
2026-02-04 03:45:55.561979 | orchestrator |
2026-02-04 03:45:55.561993 | orchestrator | + echo
2026-02-04 03:45:55.562006 | orchestrator | + echo '# Status of Prometheus'
2026-02-04 03:45:55.562077 | orchestrator | + echo
2026-02-04 03:45:55.562094 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2026-02-04 03:45:55.634065 | orchestrator | Unauthorized
2026-02-04 03:45:55.637155 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2026-02-04 03:45:55.696227 | orchestrator | Unauthorized
2026-02-04 03:45:55.699502 | orchestrator |
2026-02-04 03:45:55.699547 | orchestrator | # Status of RabbitMQ
2026-02-04 03:45:55.699561 | orchestrator |
2026-02-04 03:45:55.699573 | orchestrator | + echo
2026-02-04 03:45:55.699586 | orchestrator | + echo '# Status of RabbitMQ'
2026-02-04 03:45:55.699606 | orchestrator | + echo
2026-02-04 03:45:55.700223 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-04 03:45:55.761160 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-04 03:45:55.761284 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-04 03:45:55.761312 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2026-02-04 03:45:56.308127 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2026-02-04 03:45:56.317863 | orchestrator |
2026-02-04 03:45:56.317905 | orchestrator | # Status of Redis
2026-02-04 03:45:56.317916 | orchestrator |
2026-02-04 03:45:56.317926 | orchestrator | + echo
2026-02-04 03:45:56.317935 | orchestrator | + echo '# Status of Redis'
2026-02-04 03:45:56.317945 | orchestrator | + echo
2026-02-04 03:45:56.317956 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2026-02-04 03:45:56.325868 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001831s;;;0.000000;10.000000
2026-02-04 03:45:56.325925 | orchestrator | + popd
2026-02-04 03:45:56.326136 | orchestrator |
2026-02-04 03:45:56.326154 | orchestrator | # Create backup of MariaDB database
2026-02-04 03:45:56.326165 | orchestrator |
2026-02-04 03:45:56.326174 | orchestrator | + echo
2026-02-04 03:45:56.326184 | orchestrator | + echo '# Create backup of MariaDB database'
2026-02-04 03:45:56.326193 | orchestrator | + echo
2026-02-04 03:45:56.326204 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2026-02-04 03:45:58.398870 | orchestrator | 2026-02-04 03:45:58 | INFO  | Task 32af368f-2e0c-41f4-8d80-9d37a295d338 (mariadb_backup) was prepared for execution.
2026-02-04 03:45:58.398946 | orchestrator | 2026-02-04 03:45:58 | INFO  | It takes a moment until task 32af368f-2e0c-41f4-8d80-9d37a295d338 (mariadb_backup) has been started and output is visible here.
2026-02-04 03:48:43.486115 | orchestrator |
2026-02-04 03:48:43.486232 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 03:48:43.486251 | orchestrator |
2026-02-04 03:48:43.486264 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 03:48:43.486277 | orchestrator | Wednesday 04 February 2026 03:46:02 +0000 (0:00:00.173) 0:00:00.173 ****
2026-02-04 03:48:43.486289 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:48:43.486300 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:48:43.486312 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:48:43.486323 | orchestrator |
2026-02-04 03:48:43.486334 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 03:48:43.486371 | orchestrator | Wednesday 04 February 2026 03:46:02 +0000 (0:00:00.362) 0:00:00.535 ****
2026-02-04 03:48:43.486382 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-04 03:48:43.486394 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-04 03:48:43.486405 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-04 03:48:43.486416 | orchestrator |
2026-02-04 03:48:43.486467 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-04 03:48:43.486479 | orchestrator |
2026-02-04 03:48:43.486490 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-04 03:48:43.486501 | orchestrator | Wednesday 04 February 2026 03:46:03 +0000 (0:00:00.604) 0:00:01.140 ****
2026-02-04 03:48:43.486512 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 03:48:43.486524 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 03:48:43.486535 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 03:48:43.486546 | orchestrator |
2026-02-04 03:48:43.486558 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-04 03:48:43.486569 | orchestrator | Wednesday 04 February 2026 03:46:03 +0000 (0:00:00.561) 0:00:01.544 ****
2026-02-04 03:48:43.486581 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 03:48:43.486594 | orchestrator |
2026-02-04 03:48:43.486605 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2026-02-04 03:48:43.486634 | orchestrator | Wednesday 04 February 2026 03:46:04 +0000 (0:00:00.561) 0:00:02.105 ****
2026-02-04 03:48:43.486647 | orchestrator | ok: [testbed-node-1]
2026-02-04 03:48:43.486661 | orchestrator | ok: [testbed-node-0]
2026-02-04 03:48:43.486674 | orchestrator | ok: [testbed-node-2]
2026-02-04 03:48:43.486687 | orchestrator |
2026-02-04 03:48:43.486700 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2026-02-04 03:48:43.486713 | orchestrator | Wednesday 04 February 2026 03:46:07 +0000 (0:00:03.244) 0:00:05.350 ****
2026-02-04 03:48:43.486725 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:48:43.486740 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:48:43.486753 | orchestrator |
2026-02-04 03:48:43.486766 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] ***
2026-02-04 03:48:43.486779 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-04 03:48:43.486791 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2026-02-04 03:48:43.486804 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-04 03:48:43.486817 | orchestrator | mariadb_bootstrap_restart
2026-02-04 03:48:43.486830 | orchestrator | changed: [testbed-node-0]
2026-02-04 03:48:43.486842 | orchestrator |
2026-02-04 03:48:43.486856 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-04 03:48:43.486869 | orchestrator | skipping: no hosts matched
2026-02-04 03:48:43.486882 | orchestrator |
2026-02-04 03:48:43.486895 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-04 03:48:43.486908 | orchestrator | skipping: no hosts matched
2026-02-04 03:48:43.486920 | orchestrator |
2026-02-04 03:48:43.486933 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-04 03:48:43.486947 | orchestrator | skipping: no hosts matched
2026-02-04 03:48:43.486960 | orchestrator |
2026-02-04 03:48:43.486973 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-04 03:48:43.486984 | orchestrator |
2026-02-04 03:48:43.486995 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-04 03:48:43.487006 | orchestrator | Wednesday 04 February 2026 03:48:42 +0000 (0:02:34.584) 0:02:39.935 ****
2026-02-04 03:48:43.487017 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:48:43.487029 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:48:43.487048 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:48:43.487059 | orchestrator |
2026-02-04 03:48:43.487071 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-04 03:48:43.487082 | orchestrator | Wednesday 04 February 2026 03:48:42 +0000 (0:00:00.337) 0:02:40.273 ****
2026-02-04 03:48:43.487093 | orchestrator | skipping: [testbed-node-0]
2026-02-04 03:48:43.487104 | orchestrator | skipping: [testbed-node-1]
2026-02-04 03:48:43.487115 | orchestrator | skipping: [testbed-node-2]
2026-02-04 03:48:43.487125 | orchestrator |
2026-02-04 03:48:43.487136 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 03:48:43.487149 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 03:48:43.487161 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-04 03:48:43.487172 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-04 03:48:43.487183 | orchestrator |
2026-02-04 03:48:43.487195 | orchestrator |
2026-02-04 03:48:43.487206 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 03:48:43.487217 | orchestrator | Wednesday 04 February 2026 03:48:43 +0000 (0:00:00.435) 0:02:40.708 ****
2026-02-04 03:48:43.487228 | orchestrator | ===============================================================================
2026-02-04 03:48:43.487257 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 154.58s
2026-02-04 03:48:43.487269 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.24s
2026-02-04 03:48:43.487286 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s
2026-02-04 03:48:43.487304 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s
2026-02-04 03:48:43.487323 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.44s
2026-02-04 03:48:43.487341 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s
2026-02-04 03:48:43.487359 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2026-02-04 03:48:43.487376 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.34s
2026-02-04 03:48:43.813617 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2026-02-04 03:48:43.818520 | orchestrator | + set -e
2026-02-04 03:48:43.818564 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-04 03:48:43.818579 | orchestrator | ++ export INTERACTIVE=false
2026-02-04 03:48:43.818842 | orchestrator | ++ INTERACTIVE=false
2026-02-04 03:48:43.818864 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-04 03:48:43.818875 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-04 03:48:43.818886 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-04 03:48:43.819223 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-04 03:48:43.823513 | orchestrator |
2026-02-04 03:48:43.823540 | orchestrator | # OpenStack endpoints
2026-02-04 03:48:43.823552 | orchestrator |
2026-02-04 03:48:43.823564 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-04 03:48:43.823575 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-04 03:48:43.823586 | orchestrator | + export OS_CLOUD=admin
2026-02-04 03:48:43.823597 | orchestrator | + OS_CLOUD=admin
2026-02-04 03:48:43.823608 | orchestrator | + echo
2026-02-04 03:48:43.823619 | orchestrator | + echo '# OpenStack endpoints'
2026-02-04 03:48:43.823630 | orchestrator | + echo
2026-02-04 03:48:43.823641 | orchestrator | + openstack endpoint list
2026-02-04 03:48:47.077112 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-04 03:48:47.077243 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2026-02-04 03:48:47.077259 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-04 03:48:47.077304 | orchestrator | | 026ccc3160674125a5e3916ff64aa12a | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2026-02-04 03:48:47.077333 | orchestrator | | 034de5d3a6b94914a8c70921badedc83 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-02-04 03:48:47.077347 | orchestrator | | 1090fc72c4ff4caa98e09770d06ec2a5 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 |
2026-02-04 03:48:47.077358 | orchestrator | | 1abe4d1ad6d94fe0935a78c06a35aa09 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-02-04 03:48:47.077369 | orchestrator | | 34dbb371fb7d49f08ad463f3ac0b966e | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2026-02-04 03:48:47.077380 | orchestrator | | 41de242b05c2475884e4d56b17814808 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2026-02-04 03:48:47.077391 | orchestrator | | 52b60119b6c449c0abac24bb638a096c | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s |
2026-02-04 03:48:47.077402 | orchestrator | | 5a03808691b0470084cbbba9ad4cc7e0 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 |
2026-02-04 03:48:47.077413 | orchestrator | | 5ab4a2bde8d74b58a8da8ea0399d0cfe | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-02-04 03:48:47.077471 | orchestrator | | 70b012a8278e4fb8a05d1a8a49d2208f | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 |
2026-02-04 03:48:47.077482 | orchestrator | | 797e96a0d75944298fdc49b78b18d0a2 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 |
2026-02-04 03:48:47.077493 | orchestrator | | 8dfdb397136a4caea64188d877400ea7 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2026-02-04 03:48:47.077504 | orchestrator | | 9ff6a446a9314b829d0dde3d4e4c149c | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2026-02-04 03:48:47.077515 | orchestrator | | a1714fa9617a44cab8a33c8fa3737790 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2026-02-04 03:48:47.077526 | orchestrator | | a4a5b4939d1f4e65b844b8341a186f11 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2026-02-04 03:48:47.077537 | orchestrator | | b51e34fcfa11438da4653186980237db | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 |
2026-02-04 03:48:47.077547 | orchestrator | | b9838e8f942243ebab63a14c47a77a3a | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2026-02-04 03:48:47.077558 | orchestrator | | bdf415bd83504e9dbf99a29bffe8c9ea | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2026-02-04 03:48:47.077569 | orchestrator | | beecc39f93e94052a5b2cd80a202fcf5 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2026-02-04 03:48:47.077580 | orchestrator | | c8974d8a6ecb44faa9dbd5a9a5306918 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2026-02-04 03:48:47.077616 | orchestrator | | cd48ac908f6440709032dd5cf111d527 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2026-02-04 03:48:47.077629 | orchestrator | | cdae171d967b4ebb83beea6bdb49a108 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2026-02-04 03:48:47.077645 | orchestrator | | cf569bb9e5cf4ea2aebe81462bbab0a7 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2026-02-04 03:48:47.077659 | orchestrator | | d5710ababb794a28a58dd261df3c0847 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2026-02-04 03:48:47.077672 | orchestrator | | d78645a267124d24ba2c21fe1bdaae21 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2026-02-04 03:48:47.077685 | orchestrator | | da610643e61e40f9a0a6f9a8c9ed214e | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2026-02-04 03:48:47.077699 | orchestrator | | dc1e354d868b45909fa375c2c8f2c21b | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 |
2026-02-04 03:48:47.077712 | orchestrator | | de4adea07fd240339e4031517374f08d | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2026-02-04 03:48:47.077724 | orchestrator | | e499ebcb024f4138946342ed1fd7d1f5 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2026-02-04 03:48:47.077737 | orchestrator | | edffbb2827fb4240863d81871e9a83f2 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2026-02-04 03:48:47.077750 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2026-02-04 03:48:47.340828 | orchestrator |
2026-02-04 03:48:47.340932 | orchestrator | # Cinder
2026-02-04 03:48:47.340949 | orchestrator |
2026-02-04 03:48:47.340962 | orchestrator | + echo
2026-02-04 03:48:47.340974 | orchestrator | + echo '# Cinder'
2026-02-04 03:48:47.340986 | orchestrator | + echo
2026-02-04 03:48:47.340998 | orchestrator | + openstack volume service list
2026-02-04 03:48:49.979716 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-04 03:48:49.979813 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2026-02-04 03:48:49.979827 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-04 03:48:49.979837 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-04T03:48:42.000000 |
2026-02-04 03:48:49.979847 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-04T03:48:41.000000 |
2026-02-04 03:48:49.979857 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-04T03:48:42.000000 |
2026-02-04 03:48:49.979867 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-04T03:48:41.000000 |
2026-02-04 03:48:49.979876 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-04T03:48:46.000000 |
2026-02-04 03:48:49.979886 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-04T03:48:46.000000 |
2026-02-04 03:48:49.979901 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-04T03:48:42.000000 |
2026-02-04 03:48:49.979918 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-04T03:48:44.000000 |
2026-02-04 03:48:49.979965 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-04T03:48:45.000000 |
2026-02-04 03:48:49.979983 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-04 03:48:50.278214 | orchestrator |
2026-02-04 03:48:50.278310 | orchestrator | # Neutron
2026-02-04 03:48:50.278326 | orchestrator |
2026-02-04 03:48:50.278339 | orchestrator | + echo
2026-02-04 03:48:50.278351 | orchestrator | + echo '# Neutron'
2026-02-04 03:48:50.278363 | orchestrator | + echo
2026-02-04 03:48:50.278375 | orchestrator | + openstack network agent list
2026-02-04 03:48:52.863728 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-04 03:48:52.863832 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-02-04 03:48:52.863849 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-04 03:48:52.863862 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-02-04 03:48:52.863873 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-02-04 03:48:52.863884 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-02-04 03:48:52.863913 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-02-04 03:48:52.863924 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-02-04 03:48:52.863935 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-02-04 03:48:52.863946 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-04 03:48:52.863956 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-04 03:48:52.863967 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-04 03:48:52.863978 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-04 03:48:53.162104 | orchestrator | + openstack network service provider list
2026-02-04 03:48:55.660392 | orchestrator | +---------------+------+---------+
2026-02-04 03:48:55.660564 | orchestrator | | Service Type | Name | Default |
2026-02-04 03:48:55.660581 | orchestrator | +---------------+------+---------+
2026-02-04 03:48:55.660593 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-02-04 03:48:55.660604 | orchestrator | +---------------+------+---------+
2026-02-04 03:48:55.917986 | orchestrator |
2026-02-04 03:48:55.918135 | orchestrator | # Nova
2026-02-04 03:48:55.918152 | orchestrator |
2026-02-04 03:48:55.918165 | orchestrator | + echo
2026-02-04 03:48:55.918176 | orchestrator | + echo '# Nova'
2026-02-04 03:48:55.918188 | orchestrator | + echo
2026-02-04 03:48:55.918200 | orchestrator | + openstack compute service list
2026-02-04 03:48:59.077231 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-04 03:48:59.077323 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-02-04 03:48:59.077336 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-04 03:48:59.077373 | orchestrator | | f3741351-4b07-4abb-a851-c3b5f3e7ffae | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-04T03:48:55.000000 |
2026-02-04 03:48:59.077382 | orchestrator | | a9d7debd-2c1f-443e-b954-abeb748ed623 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-04T03:48:48.000000 |
2026-02-04 03:48:59.077390 | orchestrator | | 213ae5eb-c041-4a5f-a962-1759121e38dd | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-04T03:48:49.000000 |
2026-02-04
03:48:59.077398 | orchestrator | | faf94171-0b2d-45c0-a544-cf6a76e95f76 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-04T03:48:55.000000 | 2026-02-04 03:48:59.077406 | orchestrator | | 48b7858b-12e9-41ee-ad39-5440c75f7524 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-02-04T03:48:56.000000 | 2026-02-04 03:48:59.077414 | orchestrator | | 461b1f2f-27f9-431b-85ec-dd45061b4496 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-04T03:48:57.000000 | 2026-02-04 03:48:59.077422 | orchestrator | | f97d2a05-d2a1-431f-a27a-b8bcc1d30d6a | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-04T03:48:56.000000 | 2026-02-04 03:48:59.077493 | orchestrator | | 6da0bcec-ab99-4932-816f-5adb77c4b4bb | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-04T03:48:56.000000 | 2026-02-04 03:48:59.077503 | orchestrator | | 55dc69f5-a007-4032-a4b8-f53b165c5c56 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-04T03:48:56.000000 | 2026-02-04 03:48:59.077511 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2026-02-04 03:48:59.363992 | orchestrator | + openstack hypervisor list 2026-02-04 03:49:02.265257 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-04 03:49:02.265367 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2026-02-04 03:49:02.265384 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-04 03:49:02.265395 | orchestrator | | a16783c8-3c7b-4e6c-b853-a7f461ffab60 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2026-02-04 03:49:02.265407 | orchestrator | | 6b42b567-b944-4c8c-a095-8c87c9d0f1e6 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2026-02-04 03:49:02.265418 | orchestrator | | 
b236dddf-e6fd-4da7-80bc-53c2745cdc96 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2026-02-04 03:49:02.265455 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2026-02-04 03:49:02.540178 | orchestrator | 2026-02-04 03:49:02.540307 | orchestrator | # Run OpenStack test play 2026-02-04 03:49:02.540337 | orchestrator | 2026-02-04 03:49:02.540359 | orchestrator | + echo 2026-02-04 03:49:02.540379 | orchestrator | + echo '# Run OpenStack test play' 2026-02-04 03:49:02.540400 | orchestrator | + echo 2026-02-04 03:49:02.540421 | orchestrator | + osism apply --environment openstack test 2026-02-04 03:49:04.545085 | orchestrator | 2026-02-04 03:49:04 | INFO  | Trying to run play test in environment openstack 2026-02-04 03:49:14.742282 | orchestrator | 2026-02-04 03:49:14 | INFO  | Task 5b53a7ee-d172-420c-8cdc-3396a0d96a45 (test) was prepared for execution. 2026-02-04 03:49:14.742381 | orchestrator | 2026-02-04 03:49:14 | INFO  | It takes a moment until task 5b53a7ee-d172-420c-8cdc-3396a0d96a45 (test) has been started and output is visible here. 
2026-02-04 03:51:57.937508 | orchestrator | 2026-02-04 03:51:57.937686 | orchestrator | PLAY [Create test project] ***************************************************** 2026-02-04 03:51:57.937706 | orchestrator | 2026-02-04 03:51:57.937719 | orchestrator | TASK [Create test domain] ****************************************************** 2026-02-04 03:51:57.937731 | orchestrator | Wednesday 04 February 2026 03:49:19 +0000 (0:00:00.074) 0:00:00.074 **** 2026-02-04 03:51:57.937748 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.937769 | orchestrator | 2026-02-04 03:51:57.937787 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-02-04 03:51:57.937805 | orchestrator | Wednesday 04 February 2026 03:49:22 +0000 (0:00:03.638) 0:00:03.712 **** 2026-02-04 03:51:57.937853 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.937875 | orchestrator | 2026-02-04 03:51:57.937894 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-02-04 03:51:57.937913 | orchestrator | Wednesday 04 February 2026 03:49:26 +0000 (0:00:04.108) 0:00:07.820 **** 2026-02-04 03:51:57.937924 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.937935 | orchestrator | 2026-02-04 03:51:57.937946 | orchestrator | TASK [Create test project] ***************************************************** 2026-02-04 03:51:57.937958 | orchestrator | Wednesday 04 February 2026 03:49:33 +0000 (0:00:06.363) 0:00:14.183 **** 2026-02-04 03:51:57.937971 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.937983 | orchestrator | 2026-02-04 03:51:57.937996 | orchestrator | TASK [Create test user] ******************************************************** 2026-02-04 03:51:57.938009 | orchestrator | Wednesday 04 February 2026 03:49:37 +0000 (0:00:03.940) 0:00:18.124 **** 2026-02-04 03:51:57.938086 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.938100 | orchestrator | 2026-02-04 03:51:57.938114 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2026-02-04 03:51:57.938125 | orchestrator | Wednesday 04 February 2026 03:49:41 +0000 (0:00:04.031) 0:00:22.156 **** 2026-02-04 03:51:57.938143 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-02-04 03:51:57.938162 | orchestrator | changed: [localhost] => (item=member) 2026-02-04 03:51:57.938182 | orchestrator | changed: [localhost] => (item=creator) 2026-02-04 03:51:57.938200 | orchestrator | 2026-02-04 03:51:57.938218 | orchestrator | TASK [Create test server group] ************************************************ 2026-02-04 03:51:57.938237 | orchestrator | Wednesday 04 February 2026 03:49:52 +0000 (0:00:11.718) 0:00:33.875 **** 2026-02-04 03:51:57.938256 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.938275 | orchestrator | 2026-02-04 03:51:57.938287 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-02-04 03:51:57.938298 | orchestrator | Wednesday 04 February 2026 03:49:57 +0000 (0:00:04.203) 0:00:38.079 **** 2026-02-04 03:51:57.938309 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.938319 | orchestrator | 2026-02-04 03:51:57.938330 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2026-02-04 03:51:57.938341 | orchestrator | Wednesday 04 February 2026 03:50:01 +0000 (0:00:04.517) 0:00:42.596 **** 2026-02-04 03:51:57.938352 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.938363 | orchestrator | 2026-02-04 03:51:57.938373 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-02-04 03:51:57.938384 | orchestrator | Wednesday 04 February 2026 03:50:05 +0000 (0:00:04.265) 0:00:46.861 **** 2026-02-04 03:51:57.938395 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.938405 | orchestrator | 2026-02-04 03:51:57.938416 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2026-02-04 03:51:57.938427 | orchestrator | Wednesday 04 February 2026 03:50:09 +0000 (0:00:03.874) 0:00:50.736 **** 2026-02-04 03:51:57.938438 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.938448 | orchestrator | 2026-02-04 03:51:57.938459 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-02-04 03:51:57.938470 | orchestrator | Wednesday 04 February 2026 03:50:13 +0000 (0:00:03.969) 0:00:54.706 **** 2026-02-04 03:51:57.938481 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.938492 | orchestrator | 2026-02-04 03:51:57.938502 | orchestrator | TASK [Create test network] ***************************************************** 2026-02-04 03:51:57.938522 | orchestrator | Wednesday 04 February 2026 03:50:17 +0000 (0:00:03.813) 0:00:58.519 **** 2026-02-04 03:51:57.938584 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.938603 | orchestrator | 2026-02-04 03:51:57.938623 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-02-04 03:51:57.938641 | orchestrator | Wednesday 04 February 2026 03:50:21 +0000 (0:00:04.504) 0:01:03.024 **** 2026-02-04 03:51:57.938661 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.938672 | orchestrator | 2026-02-04 03:51:57.938683 | orchestrator | TASK [Create test router] ****************************************************** 2026-02-04 03:51:57.938705 | orchestrator | Wednesday 04 February 2026 03:50:27 +0000 (0:00:05.291) 0:01:08.315 **** 2026-02-04 03:51:57.938716 | orchestrator | changed: [localhost] 2026-02-04 03:51:57.938727 | orchestrator | 2026-02-04 03:51:57.938738 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-02-04 03:51:57.938749 | orchestrator | 2026-02-04 03:51:57.938831 | orchestrator | TASK [Get test server group] *************************************************** 2026-02-04 03:51:57.938844 
| orchestrator | Wednesday 04 February 2026 03:50:37 +0000 (0:00:09.745) 0:01:18.060 **** 2026-02-04 03:51:57.938855 | orchestrator | ok: [localhost] 2026-02-04 03:51:57.938867 | orchestrator | 2026-02-04 03:51:57.938878 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-02-04 03:51:57.938894 | orchestrator | Wednesday 04 February 2026 03:50:40 +0000 (0:00:03.539) 0:01:21.600 **** 2026-02-04 03:51:57.938913 | orchestrator | skipping: [localhost] 2026-02-04 03:51:57.938931 | orchestrator | 2026-02-04 03:51:57.938951 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-02-04 03:51:57.938972 | orchestrator | Wednesday 04 February 2026 03:50:40 +0000 (0:00:00.043) 0:01:21.643 **** 2026-02-04 03:51:57.938992 | orchestrator | skipping: [localhost] 2026-02-04 03:51:57.939013 | orchestrator | 2026-02-04 03:51:57.939033 | orchestrator | TASK [Delete test instances] *************************************************** 2026-02-04 03:51:57.939067 | orchestrator | Wednesday 04 February 2026 03:50:40 +0000 (0:00:00.041) 0:01:21.685 **** 2026-02-04 03:51:57.939079 | orchestrator | skipping: [localhost] => (item=test-4)  2026-02-04 03:51:57.939091 | orchestrator | skipping: [localhost] => (item=test-3)  2026-02-04 03:51:57.939125 | orchestrator | skipping: [localhost] => (item=test-2)  2026-02-04 03:51:57.939137 | orchestrator | skipping: [localhost] => (item=test-1)  2026-02-04 03:51:57.939148 | orchestrator | skipping: [localhost] => (item=test)  2026-02-04 03:51:57.939159 | orchestrator | skipping: [localhost] 2026-02-04 03:51:57.939170 | orchestrator | 2026-02-04 03:51:57.939181 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-02-04 03:51:57.939192 | orchestrator | Wednesday 04 February 2026 03:50:40 +0000 (0:00:00.148) 0:01:21.834 **** 2026-02-04 03:51:57.939202 | orchestrator | skipping: [localhost] 2026-02-04 
03:51:57.939213 | orchestrator | 2026-02-04 03:51:57.939224 | orchestrator | TASK [Create test instances] *************************************************** 2026-02-04 03:51:57.939235 | orchestrator | Wednesday 04 February 2026 03:50:40 +0000 (0:00:00.164) 0:01:21.998 **** 2026-02-04 03:51:57.939246 | orchestrator | changed: [localhost] => (item=test) 2026-02-04 03:51:57.939257 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-04 03:51:57.939271 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-04 03:51:57.939289 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-04 03:51:57.939306 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-04 03:51:57.939324 | orchestrator | 2026-02-04 03:51:57.939341 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-02-04 03:51:57.939360 | orchestrator | Wednesday 04 February 2026 03:50:46 +0000 (0:00:05.405) 0:01:27.404 **** 2026-02-04 03:51:57.939379 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-02-04 03:51:57.939399 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 2026-02-04 03:51:57.939418 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-02-04 03:51:57.939437 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 
2026-02-04 03:51:57.939456 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j368244216798.3711', 'results_file': '/ansible/.ansible_async/j368244216798.3711', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-04 03:51:57.939471 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j161469843356.3736', 'results_file': '/ansible/.ansible_async/j161469843356.3736', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-04 03:51:57.939494 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j290914304821.3761', 'results_file': '/ansible/.ansible_async/j290914304821.3761', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-04 03:51:57.939505 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j831227664690.3786', 'results_file': '/ansible/.ansible_async/j831227664690.3786', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-04 03:51:57.939517 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 
2026-02-04 03:51:57.939554 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j421017532140.3818', 'results_file': '/ansible/.ansible_async/j421017532140.3818', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-04 03:51:57.939566 | orchestrator | 2026-02-04 03:51:57.939577 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-02-04 03:51:57.939588 | orchestrator | Wednesday 04 February 2026 03:51:43 +0000 (0:00:57.423) 0:02:24.827 **** 2026-02-04 03:51:57.939599 | orchestrator | changed: [localhost] => (item=test) 2026-02-04 03:51:57.939610 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-04 03:51:57.939620 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-04 03:51:57.939631 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-04 03:51:57.939642 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-04 03:51:57.939652 | orchestrator | 2026-02-04 03:51:57.939663 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-02-04 03:51:57.939674 | orchestrator | Wednesday 04 February 2026 03:51:48 +0000 (0:00:04.813) 0:02:29.640 **** 2026-02-04 03:51:57.939689 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 
2026-02-04 03:51:57.939708 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j831921998883.3922', 'results_file': '/ansible/.ansible_async/j831921998883.3922', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-04 03:51:57.939727 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j217975666365.3954', 'results_file': '/ansible/.ansible_async/j217975666365.3954', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-04 03:51:57.939746 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j514267864715.3979', 'results_file': '/ansible/.ansible_async/j514267864715.3979', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-04 03:51:57.939790 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j624209236044.4004', 'results_file': '/ansible/.ansible_async/j624209236044.4004', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-04 03:52:37.407154 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j933477194292.4029', 'results_file': '/ansible/.ansible_async/j933477194292.4029', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-04 03:52:37.407268 | orchestrator | 2026-02-04 03:52:37.407285 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-02-04 03:52:37.407297 | orchestrator | Wednesday 04 February 2026 03:51:57 +0000 (0:00:09.331) 0:02:38.972 **** 2026-02-04 03:52:37.407308 | orchestrator | changed: [localhost] => (item=test) 2026-02-04 03:52:37.407319 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-04 03:52:37.407330 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-04 03:52:37.407340 | orchestrator | changed: 
[localhost] => (item=test-3) 2026-02-04 03:52:37.407374 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-04 03:52:37.407384 | orchestrator | 2026-02-04 03:52:37.407395 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-02-04 03:52:37.407406 | orchestrator | Wednesday 04 February 2026 03:52:02 +0000 (0:00:04.368) 0:02:43.341 **** 2026-02-04 03:52:37.407495 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 2026-02-04 03:52:37.407514 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j465303272968.4098', 'results_file': '/ansible/.ansible_async/j465303272968.4098', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-04 03:52:37.407525 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j517999478044.4123', 'results_file': '/ansible/.ansible_async/j517999478044.4123', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-04 03:52:37.407536 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j43161899560.4149', 'results_file': '/ansible/.ansible_async/j43161899560.4149', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-04 03:52:37.407546 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j483954328190.4175', 'results_file': '/ansible/.ansible_async/j483954328190.4175', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-04 03:52:37.407556 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j932746400034.4201', 'results_file': '/ansible/.ansible_async/j932746400034.4201', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-04 03:52:37.407566 | orchestrator | 2026-02-04 03:52:37.407576 
| orchestrator | TASK [Create test volume] ****************************************************** 2026-02-04 03:52:37.407589 | orchestrator | Wednesday 04 February 2026 03:52:12 +0000 (0:00:09.963) 0:02:53.304 **** 2026-02-04 03:52:37.407606 | orchestrator | changed: [localhost] 2026-02-04 03:52:37.407623 | orchestrator | 2026-02-04 03:52:37.407638 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-02-04 03:52:37.407653 | orchestrator | Wednesday 04 February 2026 03:52:18 +0000 (0:00:06.552) 0:02:59.857 **** 2026-02-04 03:52:37.407663 | orchestrator | changed: [localhost] 2026-02-04 03:52:37.407673 | orchestrator | 2026-02-04 03:52:37.407682 | orchestrator | TASK [Create floating ip address] ********************************************** 2026-02-04 03:52:37.407693 | orchestrator | Wednesday 04 February 2026 03:52:32 +0000 (0:00:13.336) 0:03:13.193 **** 2026-02-04 03:52:37.407706 | orchestrator | ok: [localhost] 2026-02-04 03:52:37.407718 | orchestrator | 2026-02-04 03:52:37.407730 | orchestrator | TASK [Print floating ip address] *********************************************** 2026-02-04 03:52:37.407741 | orchestrator | Wednesday 04 February 2026 03:52:37 +0000 (0:00:04.957) 0:03:18.151 **** 2026-02-04 03:52:37.407752 | orchestrator | ok: [localhost] => { 2026-02-04 03:52:37.407764 | orchestrator |  "msg": "192.168.112.113" 2026-02-04 03:52:37.407776 | orchestrator | } 2026-02-04 03:52:37.407789 | orchestrator | 2026-02-04 03:52:37.407800 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:52:37.407812 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-04 03:52:37.407824 | orchestrator | 2026-02-04 03:52:37.407833 | orchestrator | 2026-02-04 03:52:37.407843 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 03:52:37.407853 | orchestrator | 
Wednesday 04 February 2026 03:52:37 +0000 (0:00:00.047) 0:03:18.199 **** 2026-02-04 03:52:37.407862 | orchestrator | =============================================================================== 2026-02-04 03:52:37.407872 | orchestrator | Wait for instance creation to complete --------------------------------- 57.42s 2026-02-04 03:52:37.407882 | orchestrator | Attach test volume ----------------------------------------------------- 13.34s 2026-02-04 03:52:37.407916 | orchestrator | Add member roles to user test ------------------------------------------ 11.72s 2026-02-04 03:52:37.407926 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.96s 2026-02-04 03:52:37.407936 | orchestrator | Create test router ------------------------------------------------------ 9.75s 2026-02-04 03:52:37.407946 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.33s 2026-02-04 03:52:37.407955 | orchestrator | Create test volume ------------------------------------------------------ 6.55s 2026-02-04 03:52:37.407982 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.36s 2026-02-04 03:52:37.407993 | orchestrator | Create test instances --------------------------------------------------- 5.41s 2026-02-04 03:52:37.408002 | orchestrator | Create test subnet ------------------------------------------------------ 5.29s 2026-02-04 03:52:37.408012 | orchestrator | Create floating ip address ---------------------------------------------- 4.96s 2026-02-04 03:52:37.408022 | orchestrator | Add metadata to instances ----------------------------------------------- 4.81s 2026-02-04 03:52:37.408031 | orchestrator | Create ssh security group ----------------------------------------------- 4.52s 2026-02-04 03:52:37.408041 | orchestrator | Create test network ----------------------------------------------------- 4.50s 2026-02-04 03:52:37.408050 | orchestrator | Add tag to instances 
---------------------------------------------------- 4.37s 2026-02-04 03:52:37.408060 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.27s 2026-02-04 03:52:37.408069 | orchestrator | Create test server group ------------------------------------------------ 4.20s 2026-02-04 03:52:37.408079 | orchestrator | Create test-admin user -------------------------------------------------- 4.11s 2026-02-04 03:52:37.408088 | orchestrator | Create test user -------------------------------------------------------- 4.03s 2026-02-04 03:52:37.408098 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.97s 2026-02-04 03:52:37.746796 | orchestrator | + server_list 2026-02-04 03:52:37.746891 | orchestrator | + openstack --os-cloud test server list 2026-02-04 03:52:41.411017 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-04 03:52:41.411175 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-02-04 03:52:41.411188 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-04 03:52:41.411196 | orchestrator | | 8eec2ac2-8c0d-4687-a119-5ce225f4af0b | test-4 | ACTIVE | test=192.168.112.173, 192.168.200.47 | N/A (booted from volume) | SCS-1L-1 | 2026-02-04 03:52:41.411204 | orchestrator | | c1b3c58b-903a-4701-8077-f84cd7dd4f66 | test-3 | ACTIVE | test=192.168.112.190, 192.168.200.29 | N/A (booted from volume) | SCS-1L-1 | 2026-02-04 03:52:41.411211 | orchestrator | | 035ea2ba-1f15-4507-b77c-1d4cd9392728 | test-1 | ACTIVE | test=192.168.112.122, 192.168.200.96 | N/A (booted from volume) | SCS-1L-1 | 2026-02-04 03:52:41.411218 | orchestrator | | 87ab420c-8606-453e-b604-676bee43403b | test | ACTIVE | test=192.168.112.113, 192.168.200.155 | N/A (booted from volume) | SCS-1L-1 | 
2026-02-04 03:52:41.411226 | orchestrator | | e8c98f10-36a5-4e13-a392-af348c52ad37 | test-2 | ACTIVE | test=192.168.112.156, 192.168.200.67 | N/A (booted from volume) | SCS-1L-1 | 2026-02-04 03:52:41.411233 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-04 03:52:41.692309 | orchestrator | + openstack --os-cloud test server show test 2026-02-04 03:52:45.316165 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:45.316313 | orchestrator | | Field | Value | 2026-02-04 03:52:45.316333 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:45.316352 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-04 03:52:45.316364 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-04 03:52:45.316403 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-04 03:52:45.316415 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2026-02-04 03:52:45.316426 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-04 03:52:45.316438 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-04 03:52:45.316467 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 
2026-02-04 03:52:45.316479 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-04 03:52:45.316499 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-04 03:52:45.316510 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-04 03:52:45.316526 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-04 03:52:45.316537 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-04 03:52:45.316549 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-04 03:52:45.316564 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-04 03:52:45.316584 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-04 03:52:45.316640 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-04T03:51:15.000000 | 2026-02-04 03:52:45.316664 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-04 03:52:45.316691 | orchestrator | | accessIPv4 | | 2026-02-04 03:52:45.316705 | orchestrator | | accessIPv6 | | 2026-02-04 03:52:45.316718 | orchestrator | | addresses | test=192.168.112.113, 192.168.200.155 | 2026-02-04 03:52:45.316737 | orchestrator | | config_drive | | 2026-02-04 03:52:45.316751 | orchestrator | | created | 2026-02-04T03:50:50Z | 2026-02-04 03:52:45.316763 | orchestrator | | description | None | 2026-02-04 03:52:45.316776 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-04 03:52:45.316789 | orchestrator | | hostId | 0078b51fc3aba536f8ff2b25ffdbf37728c23fefbd35ce7b9e65af59 | 2026-02-04 03:52:45.316802 | orchestrator | | host_status | None | 2026-02-04 03:52:45.316831 | orchestrator | | id | 87ab420c-8606-453e-b604-676bee43403b | 2026-02-04 03:52:45.316845 | 
orchestrator | | image | N/A (booted from volume) | 2026-02-04 03:52:45.316858 | orchestrator | | key_name | test | 2026-02-04 03:52:45.316871 | orchestrator | | locked | False | 2026-02-04 03:52:45.316884 | orchestrator | | locked_reason | None | 2026-02-04 03:52:45.316899 | orchestrator | | name | test | 2026-02-04 03:52:45.316912 | orchestrator | | pinned_availability_zone | None | 2026-02-04 03:52:45.316925 | orchestrator | | progress | 0 | 2026-02-04 03:52:45.316938 | orchestrator | | project_id | 7d6c4058d33a41d1836d8e03eaa2a165 | 2026-02-04 03:52:45.316957 | orchestrator | | properties | hostname='test' | 2026-02-04 03:52:45.316984 | orchestrator | | security_groups | name='icmp' | 2026-02-04 03:52:45.316998 | orchestrator | | | name='ssh' | 2026-02-04 03:52:45.317012 | orchestrator | | server_groups | None | 2026-02-04 03:52:45.317033 | orchestrator | | status | ACTIVE | 2026-02-04 03:52:45.317045 | orchestrator | | tags | test | 2026-02-04 03:52:45.317056 | orchestrator | | trusted_image_certificates | None | 2026-02-04 03:52:45.317068 | orchestrator | | updated | 2026-02-04T03:51:49Z | 2026-02-04 03:52:45.317078 | orchestrator | | user_id | 99215e0321394c86ad095d97ae11ecdb | 2026-02-04 03:52:45.317090 | orchestrator | | volumes_attached | delete_on_termination='True', id='db61fb2e-1ba4-4777-ae7a-811e75f2ec64' | 2026-02-04 03:52:45.317107 | orchestrator | | | delete_on_termination='False', id='bfde2fb8-73fc-4799-ac1f-b5218f990d30' | 2026-02-04 03:52:45.318943 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:45.635866 | orchestrator | + openstack --os-cloud test 
server show test-1 2026-02-04 03:52:48.619590 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:48.619689 | orchestrator | | Field | Value | 2026-02-04 03:52:48.619721 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:48.619734 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-04 03:52:48.619744 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-04 03:52:48.619754 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-04 03:52:48.619764 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2026-02-04 03:52:48.619794 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-04 03:52:48.619805 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-04 03:52:48.619832 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-04 03:52:48.619843 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-04 03:52:48.619853 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-04 03:52:48.619868 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-04 03:52:48.619878 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-04 03:52:48.619889 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 
2026-02-04 03:52:48.619899 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-04 03:52:48.619916 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-04 03:52:48.619926 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-04 03:52:48.619936 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-04T03:51:17.000000 | 2026-02-04 03:52:48.619954 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-04 03:52:48.619964 | orchestrator | | accessIPv4 | | 2026-02-04 03:52:48.619974 | orchestrator | | accessIPv6 | | 2026-02-04 03:52:48.619989 | orchestrator | | addresses | test=192.168.112.122, 192.168.200.96 | 2026-02-04 03:52:48.620000 | orchestrator | | config_drive | | 2026-02-04 03:52:48.620010 | orchestrator | | created | 2026-02-04T03:50:50Z | 2026-02-04 03:52:48.620026 | orchestrator | | description | None | 2026-02-04 03:52:48.620036 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-04 03:52:48.620046 | orchestrator | | hostId | 0078b51fc3aba536f8ff2b25ffdbf37728c23fefbd35ce7b9e65af59 | 2026-02-04 03:52:48.620056 | orchestrator | | host_status | None | 2026-02-04 03:52:48.620072 | orchestrator | | id | 035ea2ba-1f15-4507-b77c-1d4cd9392728 | 2026-02-04 03:52:48.620083 | orchestrator | | image | N/A (booted from volume) | 2026-02-04 03:52:48.620093 | orchestrator | | key_name | test | 2026-02-04 03:52:48.620107 | orchestrator | | locked | False | 2026-02-04 03:52:48.620118 | orchestrator | | locked_reason | None | 2026-02-04 03:52:48.620135 | orchestrator | | name | test-1 | 2026-02-04 03:52:48.620147 | orchestrator | | pinned_availability_zone | None | 2026-02-04 03:52:48.620159 | 
orchestrator | | progress | 0 | 2026-02-04 03:52:48.620171 | orchestrator | | project_id | 7d6c4058d33a41d1836d8e03eaa2a165 | 2026-02-04 03:52:48.620182 | orchestrator | | properties | hostname='test-1' | 2026-02-04 03:52:48.620200 | orchestrator | | security_groups | name='icmp' | 2026-02-04 03:52:48.620213 | orchestrator | | | name='ssh' | 2026-02-04 03:52:48.620225 | orchestrator | | server_groups | None | 2026-02-04 03:52:48.620237 | orchestrator | | status | ACTIVE | 2026-02-04 03:52:48.620248 | orchestrator | | tags | test | 2026-02-04 03:52:48.620266 | orchestrator | | trusted_image_certificates | None | 2026-02-04 03:52:48.620278 | orchestrator | | updated | 2026-02-04T03:51:50Z | 2026-02-04 03:52:48.620290 | orchestrator | | user_id | 99215e0321394c86ad095d97ae11ecdb | 2026-02-04 03:52:48.620301 | orchestrator | | volumes_attached | delete_on_termination='True', id='48c2373b-6e7f-45d9-bcd3-68efd66fd9dc' | 2026-02-04 03:52:48.625247 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:48.911456 | orchestrator | + openstack --os-cloud test server show test-2 2026-02-04 03:52:51.950372 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:51.950492 | orchestrator | | Field | Value | 
2026-02-04 03:52:51.950540 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:51.950561 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-04 03:52:51.950605 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-04 03:52:51.950747 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-04 03:52:51.950764 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-02-04 03:52:51.950781 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-04 03:52:51.950796 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-04 03:52:51.950834 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-04 03:52:51.950851 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-04 03:52:51.950867 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-04 03:52:51.950891 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-04 03:52:51.950920 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-04 03:52:51.950936 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-04 03:52:51.950953 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-04 03:52:51.950970 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-04 03:52:51.950986 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-04 03:52:51.951003 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-04T03:51:16.000000 | 2026-02-04 03:52:51.951028 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-04 03:52:51.951045 | orchestrator | | accessIPv4 | | 2026-02-04 03:52:51.951061 | orchestrator | | accessIPv6 | | 2026-02-04 
03:52:51.951100 | orchestrator | | addresses | test=192.168.112.156, 192.168.200.67 | 2026-02-04 03:52:51.951118 | orchestrator | | config_drive | | 2026-02-04 03:52:51.951134 | orchestrator | | created | 2026-02-04T03:50:50Z | 2026-02-04 03:52:51.951151 | orchestrator | | description | None | 2026-02-04 03:52:51.951168 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-04 03:52:51.951185 | orchestrator | | hostId | 0078b51fc3aba536f8ff2b25ffdbf37728c23fefbd35ce7b9e65af59 | 2026-02-04 03:52:51.951201 | orchestrator | | host_status | None | 2026-02-04 03:52:51.951225 | orchestrator | | id | e8c98f10-36a5-4e13-a392-af348c52ad37 | 2026-02-04 03:52:51.951242 | orchestrator | | image | N/A (booted from volume) | 2026-02-04 03:52:51.951268 | orchestrator | | key_name | test | 2026-02-04 03:52:51.951289 | orchestrator | | locked | False | 2026-02-04 03:52:51.951306 | orchestrator | | locked_reason | None | 2026-02-04 03:52:51.951323 | orchestrator | | name | test-2 | 2026-02-04 03:52:51.951360 | orchestrator | | pinned_availability_zone | None | 2026-02-04 03:52:51.951377 | orchestrator | | progress | 0 | 2026-02-04 03:52:51.951392 | orchestrator | | project_id | 7d6c4058d33a41d1836d8e03eaa2a165 | 2026-02-04 03:52:51.951408 | orchestrator | | properties | hostname='test-2' | 2026-02-04 03:52:51.951435 | orchestrator | | security_groups | name='icmp' | 2026-02-04 03:52:51.951456 | orchestrator | | | name='ssh' | 2026-02-04 03:52:51.951481 | orchestrator | | server_groups | None | 2026-02-04 03:52:51.951503 | orchestrator | | status | ACTIVE | 2026-02-04 03:52:51.951519 | orchestrator | | tags | test | 2026-02-04 
03:52:51.951536 | orchestrator | | trusted_image_certificates | None | 2026-02-04 03:52:51.951552 | orchestrator | | updated | 2026-02-04T03:51:51Z | 2026-02-04 03:52:51.951568 | orchestrator | | user_id | 99215e0321394c86ad095d97ae11ecdb | 2026-02-04 03:52:51.951584 | orchestrator | | volumes_attached | delete_on_termination='True', id='a90d48cd-30a7-4cbd-bf0b-ca5d2d3fe29b' | 2026-02-04 03:52:51.951599 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:52.249429 | orchestrator | + openstack --os-cloud test server show test-3 2026-02-04 03:52:55.236201 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:55.236400 | orchestrator | | Field | Value | 2026-02-04 03:52:55.236422 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:55.236448 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-04 
03:52:55.236461 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-04 03:52:55.236472 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-04 03:52:55.236484 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-02-04 03:52:55.236495 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-04 03:52:55.236507 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-04 03:52:55.236548 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-04 03:52:55.236569 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-04 03:52:55.236581 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-04 03:52:55.236593 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-04 03:52:55.236609 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-04 03:52:55.236620 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-04 03:52:55.236632 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-04 03:52:55.236643 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-04 03:52:55.236664 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-04 03:52:55.236676 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-04T03:51:18.000000 | 2026-02-04 03:52:55.236702 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-04 03:52:55.236714 | orchestrator | | accessIPv4 | | 2026-02-04 03:52:55.236725 | orchestrator | | accessIPv6 | | 2026-02-04 03:52:55.236737 | orchestrator | | addresses | test=192.168.112.190, 192.168.200.29 | 2026-02-04 03:52:55.237158 | orchestrator | | config_drive | | 2026-02-04 03:52:55.237172 | orchestrator | | created | 2026-02-04T03:50:52Z | 2026-02-04 03:52:55.237184 | orchestrator | | description | None | 2026-02-04 03:52:55.237195 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', 
extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-04 03:52:55.237206 | orchestrator | | hostId | 0078b51fc3aba536f8ff2b25ffdbf37728c23fefbd35ce7b9e65af59 | 2026-02-04 03:52:55.237217 | orchestrator | | host_status | None | 2026-02-04 03:52:55.237244 | orchestrator | | id | c1b3c58b-903a-4701-8077-f84cd7dd4f66 | 2026-02-04 03:52:55.237261 | orchestrator | | image | N/A (booted from volume) | 2026-02-04 03:52:55.237272 | orchestrator | | key_name | test | 2026-02-04 03:52:55.237284 | orchestrator | | locked | False | 2026-02-04 03:52:55.237295 | orchestrator | | locked_reason | None | 2026-02-04 03:52:55.237306 | orchestrator | | name | test-3 | 2026-02-04 03:52:55.237318 | orchestrator | | pinned_availability_zone | None | 2026-02-04 03:52:55.237380 | orchestrator | | progress | 0 | 2026-02-04 03:52:55.237393 | orchestrator | | project_id | 7d6c4058d33a41d1836d8e03eaa2a165 | 2026-02-04 03:52:55.237411 | orchestrator | | properties | hostname='test-3' | 2026-02-04 03:52:55.237432 | orchestrator | | security_groups | name='icmp' | 2026-02-04 03:52:55.237452 | orchestrator | | | name='ssh' | 2026-02-04 03:52:55.237464 | orchestrator | | server_groups | None | 2026-02-04 03:52:55.237475 | orchestrator | | status | ACTIVE | 2026-02-04 03:52:55.237487 | orchestrator | | tags | test | 2026-02-04 03:52:55.237498 | orchestrator | | trusted_image_certificates | None | 2026-02-04 03:52:55.237509 | orchestrator | | updated | 2026-02-04T03:51:51Z | 2026-02-04 03:52:55.237520 | orchestrator | | user_id | 99215e0321394c86ad095d97ae11ecdb | 2026-02-04 03:52:55.237538 | orchestrator | | volumes_attached | delete_on_termination='True', id='0dfe624a-4361-4b18-9209-ebdf873b598e' | 2026-02-04 03:52:55.241399 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:55.529385 | orchestrator | + openstack --os-cloud test server show test-4 2026-02-04 03:52:58.567878 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:58.568010 | orchestrator | | Field | Value | 2026-02-04 03:52:58.568043 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:58.568067 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-04 03:52:58.568080 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-04 03:52:58.568091 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-04 03:52:58.568103 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-02-04 03:52:58.568141 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-04 03:52:58.568153 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-04 
03:52:58.568183 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-04 03:52:58.568205 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-04 03:52:58.568216 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-04 03:52:58.568227 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-04 03:52:58.568238 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-04 03:52:58.568249 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-04 03:52:58.568260 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-04 03:52:58.568280 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-04 03:52:58.568292 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-04 03:52:58.568303 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-04T03:51:23.000000 | 2026-02-04 03:52:58.568367 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-04 03:52:58.568397 | orchestrator | | accessIPv4 | | 2026-02-04 03:52:58.568419 | orchestrator | | accessIPv6 | | 2026-02-04 03:52:58.568442 | orchestrator | | addresses | test=192.168.112.173, 192.168.200.47 | 2026-02-04 03:52:58.568456 | orchestrator | | config_drive | | 2026-02-04 03:52:58.568470 | orchestrator | | created | 2026-02-04T03:50:52Z | 2026-02-04 03:52:58.568483 | orchestrator | | description | None | 2026-02-04 03:52:58.568505 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-04 03:52:58.568520 | orchestrator | | hostId | 0078b51fc3aba536f8ff2b25ffdbf37728c23fefbd35ce7b9e65af59 | 2026-02-04 03:52:58.568533 | orchestrator | | host_status | None | 2026-02-04 03:52:58.568555 | orchestrator | | id | 
8eec2ac2-8c0d-4687-a119-5ce225f4af0b | 2026-02-04 03:52:58.568573 | orchestrator | | image | N/A (booted from volume) | 2026-02-04 03:52:58.568587 | orchestrator | | key_name | test | 2026-02-04 03:52:58.568600 | orchestrator | | locked | False | 2026-02-04 03:52:58.568613 | orchestrator | | locked_reason | None | 2026-02-04 03:52:58.568627 | orchestrator | | name | test-4 | 2026-02-04 03:52:58.568655 | orchestrator | | pinned_availability_zone | None | 2026-02-04 03:52:58.568668 | orchestrator | | progress | 0 | 2026-02-04 03:52:58.568681 | orchestrator | | project_id | 7d6c4058d33a41d1836d8e03eaa2a165 | 2026-02-04 03:52:58.568699 | orchestrator | | properties | hostname='test-4' | 2026-02-04 03:52:58.568729 | orchestrator | | security_groups | name='icmp' | 2026-02-04 03:52:58.568756 | orchestrator | | | name='ssh' | 2026-02-04 03:52:58.568777 | orchestrator | | server_groups | None | 2026-02-04 03:52:58.568797 | orchestrator | | status | ACTIVE | 2026-02-04 03:52:58.568817 | orchestrator | | tags | test | 2026-02-04 03:52:58.568840 | orchestrator | | trusted_image_certificates | None | 2026-02-04 03:52:58.568852 | orchestrator | | updated | 2026-02-04T03:51:52Z | 2026-02-04 03:52:58.568864 | orchestrator | | user_id | 99215e0321394c86ad095d97ae11ecdb | 2026-02-04 03:52:58.568875 | orchestrator | | volumes_attached | delete_on_termination='True', id='a20e204f-1818-43c4-a9d9-531ed8b24748' | 2026-02-04 03:52:58.573933 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-04 03:52:58.860527 | orchestrator | + server_ping 2026-02-04 03:52:58.861073 | orchestrator | ++ openstack --os-cloud 
test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-02-04 03:52:58.861108 | orchestrator | ++ tr -d '\r' 2026-02-04 03:53:01.603819 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-04 03:53:01.603924 | orchestrator | + ping -c3 192.168.112.156 2026-02-04 03:53:01.616140 | orchestrator | PING 192.168.112.156 (192.168.112.156) 56(84) bytes of data. 2026-02-04 03:53:01.616224 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=1 ttl=63 time=5.21 ms 2026-02-04 03:53:02.613914 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=2 ttl=63 time=2.21 ms 2026-02-04 03:53:03.614964 | orchestrator | 64 bytes from 192.168.112.156: icmp_seq=3 ttl=63 time=1.63 ms 2026-02-04 03:53:03.615045 | orchestrator | 2026-02-04 03:53:03.615054 | orchestrator | --- 192.168.112.156 ping statistics --- 2026-02-04 03:53:03.615063 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2026-02-04 03:53:03.615070 | orchestrator | rtt min/avg/max/mdev = 1.632/3.019/5.214/1.569 ms 2026-02-04 03:53:03.616138 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-04 03:53:03.616231 | orchestrator | + ping -c3 192.168.112.173 2026-02-04 03:53:03.628177 | orchestrator | PING 192.168.112.173 (192.168.112.173) 56(84) bytes of data. 
2026-02-04 03:53:03.628246 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=1 ttl=63 time=7.36 ms 2026-02-04 03:53:04.625084 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=2 ttl=63 time=2.26 ms 2026-02-04 03:53:05.626843 | orchestrator | 64 bytes from 192.168.112.173: icmp_seq=3 ttl=63 time=2.05 ms 2026-02-04 03:53:05.626948 | orchestrator | 2026-02-04 03:53:05.626966 | orchestrator | --- 192.168.112.173 ping statistics --- 2026-02-04 03:53:05.627011 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-04 03:53:05.627025 | orchestrator | rtt min/avg/max/mdev = 2.054/3.890/7.360/2.454 ms 2026-02-04 03:53:05.627249 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-04 03:53:05.627303 | orchestrator | + ping -c3 192.168.112.190 2026-02-04 03:53:05.640324 | orchestrator | PING 192.168.112.190 (192.168.112.190) 56(84) bytes of data. 2026-02-04 03:53:05.640395 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=1 ttl=63 time=7.86 ms 2026-02-04 03:53:06.636317 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=2 ttl=63 time=2.47 ms 2026-02-04 03:53:07.638576 | orchestrator | 64 bytes from 192.168.112.190: icmp_seq=3 ttl=63 time=2.22 ms 2026-02-04 03:53:07.638681 | orchestrator | 2026-02-04 03:53:07.638698 | orchestrator | --- 192.168.112.190 ping statistics --- 2026-02-04 03:53:07.638711 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-04 03:53:07.638802 | orchestrator | rtt min/avg/max/mdev = 2.223/4.182/7.859/2.601 ms 2026-02-04 03:53:07.638823 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-04 03:53:07.638835 | orchestrator | + ping -c3 192.168.112.122 2026-02-04 03:53:07.653132 | orchestrator | PING 192.168.112.122 (192.168.112.122) 56(84) bytes of data. 
2026-02-04 03:53:07.653212 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=1 ttl=63 time=9.33 ms 2026-02-04 03:53:08.647700 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=2 ttl=63 time=2.71 ms 2026-02-04 03:53:09.648924 | orchestrator | 64 bytes from 192.168.112.122: icmp_seq=3 ttl=63 time=2.13 ms 2026-02-04 03:53:09.648997 | orchestrator | 2026-02-04 03:53:09.649004 | orchestrator | --- 192.168.112.122 ping statistics --- 2026-02-04 03:53:09.649010 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-04 03:53:09.649015 | orchestrator | rtt min/avg/max/mdev = 2.127/4.722/9.334/3.269 ms 2026-02-04 03:53:09.649501 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-04 03:53:09.649513 | orchestrator | + ping -c3 192.168.112.113 2026-02-04 03:53:09.663901 | orchestrator | PING 192.168.112.113 (192.168.112.113) 56(84) bytes of data. 2026-02-04 03:53:09.663967 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=1 ttl=63 time=9.36 ms 2026-02-04 03:53:10.659357 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=2 ttl=63 time=2.89 ms 2026-02-04 03:53:11.659588 | orchestrator | 64 bytes from 192.168.112.113: icmp_seq=3 ttl=63 time=1.81 ms 2026-02-04 03:53:11.659693 | orchestrator | 2026-02-04 03:53:11.659711 | orchestrator | --- 192.168.112.113 ping statistics --- 2026-02-04 03:53:11.659723 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-04 03:53:11.659735 | orchestrator | rtt min/avg/max/mdev = 1.814/4.687/9.361/3.333 ms 2026-02-04 03:53:11.660370 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-04 03:53:12.105464 | orchestrator | ok: Runtime: 0:10:21.745342 2026-02-04 03:53:12.157507 | 2026-02-04 03:53:12.157639 | TASK [Run tempest] 2026-02-04 03:53:12.692040 | orchestrator | skipping: Conditional result was False 2026-02-04 03:53:12.709663 | 2026-02-04 
03:53:12.709817 | TASK [Check prometheus alert status] 2026-02-04 03:53:13.251168 | orchestrator | skipping: Conditional result was False 2026-02-04 03:53:13.262461 | 2026-02-04 03:53:13.262593 | PLAY [Upgrade testbed] 2026-02-04 03:53:13.274553 | 2026-02-04 03:53:13.274674 | TASK [Print next ceph version] 2026-02-04 03:53:13.344185 | orchestrator | ok 2026-02-04 03:53:13.355338 | 2026-02-04 03:53:13.355465 | TASK [Print next openstack version] 2026-02-04 03:53:13.427591 | orchestrator | ok 2026-02-04 03:53:13.436216 | 2026-02-04 03:53:13.436325 | TASK [Print next manager version] 2026-02-04 03:53:13.513937 | orchestrator | ok 2026-02-04 03:53:13.523243 | 2026-02-04 03:53:13.523362 | TASK [Set cloud fact (Zuul deployment)] 2026-02-04 03:53:13.583189 | orchestrator | ok 2026-02-04 03:53:13.595272 | 2026-02-04 03:53:13.595388 | TASK [Set cloud fact (local deployment)] 2026-02-04 03:53:13.620641 | orchestrator | skipping: Conditional result was False 2026-02-04 03:53:13.632646 | 2026-02-04 03:53:13.632770 | TASK [Fetch manager address] 2026-02-04 03:53:13.928963 | orchestrator | ok 2026-02-04 03:53:13.939230 | 2026-02-04 03:53:13.939366 | TASK [Set manager_host address] 2026-02-04 03:53:14.008589 | orchestrator | ok 2026-02-04 03:53:14.018788 | 2026-02-04 03:53:14.018979 | TASK [Run upgrade] 2026-02-04 03:53:14.718988 | orchestrator | + set -e 2026-02-04 03:53:14.719148 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-04 03:53:14.719168 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-04 03:53:14.719184 | orchestrator | + CEPH_VERSION=reef 2026-02-04 03:53:14.719194 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-04 03:53:14.719203 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-04 03:53:14.719219 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release' 2026-02-04 03:53:14.727816 | orchestrator | + set -e 2026-02-04 03:53:14.727916 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-02-04 03:53:14.728334 | orchestrator | ++ export INTERACTIVE=false 2026-02-04 03:53:14.728365 | orchestrator | ++ INTERACTIVE=false 2026-02-04 03:53:14.728374 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-04 03:53:14.728391 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-04 03:53:14.729745 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-02-04 03:53:14.762988 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-02-04 03:53:14.763481 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-04 03:53:14.797117 | orchestrator | 2026-02-04 03:53:14.797191 | orchestrator | # UPGRADE MANAGER 2026-02-04 03:53:14.797204 | orchestrator | 2026-02-04 03:53:14.797210 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-02-04 03:53:14.797217 | orchestrator | + echo 2026-02-04 03:53:14.797244 | orchestrator | + echo '# UPGRADE MANAGER' 2026-02-04 03:53:14.797253 | orchestrator | + echo 2026-02-04 03:53:14.797259 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-04 03:53:14.797266 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-04 03:53:14.797271 | orchestrator | + CEPH_VERSION=reef 2026-02-04 03:53:14.797277 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-04 03:53:14.797283 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-04 03:53:14.797289 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1 2026-02-04 03:53:14.802360 | orchestrator | + set -e 2026-02-04 03:53:14.802400 | orchestrator | + VERSION=10.0.0-rc.1 2026-02-04 03:53:14.802407 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml 2026-02-04 03:53:14.810048 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]] 2026-02-04 03:53:14.810208 | orchestrator | + sed -i /ceph_version:/d 
/opt/configuration/environments/manager/configuration.yml 2026-02-04 03:53:14.814600 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-04 03:53:14.819639 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-04 03:53:14.831031 | orchestrator | /opt/configuration ~ 2026-02-04 03:53:14.831123 | orchestrator | + set -e 2026-02-04 03:53:14.831139 | orchestrator | + pushd /opt/configuration 2026-02-04 03:53:14.831152 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-04 03:53:14.831166 | orchestrator | + source /opt/venv/bin/activate 2026-02-04 03:53:14.834476 | orchestrator | ++ deactivate nondestructive 2026-02-04 03:53:14.834551 | orchestrator | ++ '[' -n '' ']' 2026-02-04 03:53:14.834578 | orchestrator | ++ '[' -n '' ']' 2026-02-04 03:53:14.835919 | orchestrator | ++ hash -r 2026-02-04 03:53:14.835954 | orchestrator | ++ '[' -n '' ']' 2026-02-04 03:53:14.835966 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-04 03:53:14.835977 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-04 03:53:14.835988 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-04 03:53:14.836003 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-04 03:53:14.836014 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-04 03:53:14.836025 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-04 03:53:14.836036 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-04 03:53:14.836049 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 03:53:14.836061 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 03:53:14.836073 | orchestrator | ++ export PATH 2026-02-04 03:53:14.836084 | orchestrator | ++ '[' -n '' ']' 2026-02-04 03:53:14.836096 | orchestrator | ++ '[' -z '' ']' 2026-02-04 03:53:14.836107 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-04 03:53:14.836118 | orchestrator | ++ PS1='(venv) ' 2026-02-04 03:53:14.836131 | orchestrator | ++ export PS1 2026-02-04 03:53:14.836143 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-04 03:53:14.836155 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-04 03:53:14.836165 | orchestrator | ++ hash -r 2026-02-04 03:53:14.836182 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-04 03:53:16.023529 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-04 03:53:16.024583 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-04 03:53:16.026100 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-04 03:53:16.028150 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-04 03:53:16.029275 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-04 03:53:16.041052 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-04 03:53:16.041273 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-04 03:53:16.042598 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-04 03:53:16.043899 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-04 03:53:16.085330 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-04 03:53:16.086632 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-04 03:53:16.088680 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-04 03:53:16.089771 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-04 03:53:16.093747 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-04 03:53:16.328200 | orchestrator | ++ which gilt 2026-02-04 03:53:16.331435 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-04 03:53:16.331528 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-04 03:53:16.563933 | orchestrator | osism.cfg-generics: 2026-02-04 03:53:16.669049 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-04 03:53:16.670563 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-04 03:53:16.671651 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-04 03:53:16.671662 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-04 03:53:17.566563 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-04 03:53:17.576283 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-04 03:53:18.013529 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-04 03:53:18.075045 | orchestrator | ~ 2026-02-04 03:53:18.075151 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-04 03:53:18.075164 | orchestrator | + deactivate 2026-02-04 03:53:18.075172 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-04 03:53:18.075182 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 03:53:18.075190 | orchestrator | + export PATH 2026-02-04 03:53:18.075198 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-04 03:53:18.075205 | orchestrator | + '[' -n '' ']' 2026-02-04 03:53:18.075239 | orchestrator | + hash -r 2026-02-04 03:53:18.075246 | orchestrator | + '[' -n '' ']' 2026-02-04 03:53:18.075252 | orchestrator | + unset VIRTUAL_ENV 2026-02-04 03:53:18.075259 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-04 03:53:18.075266 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-04 03:53:18.075273 | orchestrator | + unset -f deactivate 2026-02-04 03:53:18.075280 | orchestrator | + popd 2026-02-04 03:53:18.077132 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-02-04 03:53:18.077174 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-02-04 03:53:18.081845 | orchestrator | + set -e 2026-02-04 03:53:18.081921 | orchestrator | + NAMESPACE=kolla/release 2026-02-04 03:53:18.081938 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-04 03:53:18.088928 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-04 03:53:18.093904 | orchestrator | /opt/configuration ~ 2026-02-04 03:53:18.093954 | orchestrator | + set -e 2026-02-04 03:53:18.093967 | orchestrator | + pushd /opt/configuration 2026-02-04 03:53:18.093978 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-04 03:53:18.093989 | orchestrator | + source /opt/venv/bin/activate 2026-02-04 03:53:18.094000 | orchestrator | ++ deactivate nondestructive 2026-02-04 03:53:18.094011 | orchestrator | ++ '[' -n '' ']' 2026-02-04 03:53:18.094067 | orchestrator | ++ '[' -n '' ']' 2026-02-04 03:53:18.094083 | orchestrator | ++ hash -r 2026-02-04 03:53:18.094092 | orchestrator | ++ '[' -n '' ']' 2026-02-04 03:53:18.094102 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-04 03:53:18.094111 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-04 03:53:18.094121 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-04 03:53:18.094130 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-04 03:53:18.094140 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-04 03:53:18.094150 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-04 03:53:18.094164 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-04 03:53:18.094174 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 03:53:18.094186 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 03:53:18.094196 | orchestrator | ++ export PATH 2026-02-04 03:53:18.094206 | orchestrator | ++ '[' -n '' ']' 2026-02-04 03:53:18.094241 | orchestrator | ++ '[' -z '' ']' 2026-02-04 03:53:18.094251 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-04 03:53:18.094261 | orchestrator | ++ PS1='(venv) ' 2026-02-04 03:53:18.094270 | orchestrator | ++ export PS1 2026-02-04 03:53:18.094280 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-04 03:53:18.094297 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-04 03:53:18.094307 | orchestrator | ++ hash -r 2026-02-04 03:53:18.094317 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-04 03:53:18.607556 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-04 03:53:18.608523 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-04 03:53:18.609800 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-04 03:53:18.611395 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-04 03:53:18.612391 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-04 03:53:18.623161 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-04 03:53:18.624373 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-04 03:53:18.625700 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-04 03:53:18.626977 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-04 03:53:18.663879 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-04 03:53:18.664929 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-04 03:53:18.666919 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-04 03:53:18.668378 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-04 03:53:18.672248 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-04 03:53:18.908625 | orchestrator | ++ which gilt 2026-02-04 03:53:18.910056 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-04 03:53:18.910110 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-04 03:53:19.065695 | orchestrator | osism.cfg-generics: 2026-02-04 03:53:19.147127 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-04 03:53:19.147391 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-04 03:53:19.147664 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-04 03:53:19.147693 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-04 03:53:19.611384 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-04 03:53:19.622865 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-04 03:53:19.949806 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-04 03:53:20.009883 | orchestrator | ~ 2026-02-04 03:53:20.009975 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-04 03:53:20.009988 | orchestrator | + deactivate 2026-02-04 03:53:20.010055 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-04 03:53:20.010070 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-04 03:53:20.010079 | orchestrator | + export PATH 2026-02-04 03:53:20.010089 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-04 03:53:20.010097 | orchestrator | + '[' -n '' ']' 2026-02-04 03:53:20.010105 | orchestrator | + hash -r 2026-02-04 03:53:20.010113 | orchestrator | + '[' -n '' ']' 2026-02-04 03:53:20.010122 | orchestrator | + unset VIRTUAL_ENV 2026-02-04 03:53:20.010130 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-04 03:53:20.010139 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-04 03:53:20.010147 | orchestrator | + unset -f deactivate 2026-02-04 03:53:20.010155 | orchestrator | + popd 2026-02-04 03:53:20.011868 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-02-04 03:53:20.066978 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-04 03:53:20.067593 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-04 03:53:20.179421 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-04 03:53:20.179487 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-04 03:53:20.184314 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-04 03:53:20.192547 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-02-04 03:53:20.265575 | orchestrator | ++ '[' -1 -le 0 ']' 2026-02-04 03:53:20.266426 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-02-04 03:53:20.366443 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-02-04 03:53:20.366533 | orchestrator | ++ echo true 2026-02-04 03:53:20.367159 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-02-04 03:53:20.370250 | orchestrator | +++ semver 2024.2 2024.2 2026-02-04 03:53:20.422231 | orchestrator | ++ '[' 0 -le 0 ']' 2026-02-04 03:53:20.422685 | orchestrator | +++ semver 2024.2 2025.1 2026-02-04 03:53:20.469512 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-02-04 03:53:20.469611 | orchestrator | ++ echo false 2026-02-04 03:53:20.469938 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-02-04 03:53:20.469967 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-04 03:53:20.470058 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-02-04 03:53:20.470085 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-02-04 03:53:20.470272 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 2026-02-04 03:53:20.474697 | orchestrator | + 
echo 'export RABBITMQ3TO4=true' 2026-02-04 03:53:20.474760 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-02-04 03:53:20.490076 | orchestrator | export RABBITMQ3TO4=true 2026-02-04 03:53:20.493684 | orchestrator | + osism update manager 2026-02-04 03:53:26.303708 | orchestrator | Collecting uv 2026-02-04 03:53:26.385115 | orchestrator | Downloading uv-0.9.29-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-02-04 03:53:26.402296 | orchestrator | Downloading uv-0.9.29-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (22.8 MB) 2026-02-04 03:53:27.111486 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 22.8/22.8 MB 34.9 MB/s eta 0:00:00 2026-02-04 03:53:27.165749 | orchestrator | Installing collected packages: uv 2026-02-04 03:53:27.605149 | orchestrator | Successfully installed uv-0.9.29 2026-02-04 03:53:28.259378 | orchestrator | Resolved 11 packages in 372ms 2026-02-04 03:53:28.297325 | orchestrator | Downloading netaddr (2.2MiB) 2026-02-04 03:53:28.297607 | orchestrator | Downloading cryptography (4.2MiB) 2026-02-04 03:53:28.297637 | orchestrator | Downloading ansible-core (2.1MiB) 2026-02-04 03:53:28.297655 | orchestrator | Downloading ansible (54.5MiB) 2026-02-04 03:53:28.685481 | orchestrator | Downloaded netaddr 2026-02-04 03:53:28.800728 | orchestrator | Downloaded cryptography 2026-02-04 03:53:28.885069 | orchestrator | Downloaded ansible-core 2026-02-04 03:53:35.397775 | orchestrator | Downloaded ansible 2026-02-04 03:53:35.398395 | orchestrator | Prepared 11 packages in 7.13s 2026-02-04 03:53:35.998882 | orchestrator | Installed 11 packages in 599ms 2026-02-04 03:53:35.998978 | orchestrator | + ansible==11.11.0 2026-02-04 03:53:35.998993 | orchestrator | + ansible-core==2.18.13 2026-02-04 03:53:35.999005 | orchestrator | + cffi==2.0.0 2026-02-04 03:53:35.999017 | orchestrator | + cryptography==46.0.4 2026-02-04 03:53:35.999029 | orchestrator | + jinja2==3.1.6 2026-02-04 03:53:35.999040 | orchestrator | 
+ markupsafe==3.0.3 2026-02-04 03:53:35.999051 | orchestrator | + netaddr==1.3.0 2026-02-04 03:53:35.999061 | orchestrator | + packaging==26.0 2026-02-04 03:53:35.999072 | orchestrator | + pycparser==3.0 2026-02-04 03:53:35.999083 | orchestrator | + pyyaml==6.0.3 2026-02-04 03:53:35.999095 | orchestrator | + resolvelib==1.0.1 2026-02-04 03:53:37.158376 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-203054kz87ctm7/tmpo9n6a5pg/ansible-collection-serviceslyb515vc'... 2026-02-04 03:53:38.569179 | orchestrator | Your branch is up to date with 'origin/main'. 2026-02-04 03:53:38.569275 | orchestrator | Already on 'main' 2026-02-04 03:53:39.064513 | orchestrator | Starting galaxy collection install process 2026-02-04 03:53:39.064602 | orchestrator | Process install dependency map 2026-02-04 03:53:39.064615 | orchestrator | Starting collection install process 2026-02-04 03:53:39.064630 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-02-04 03:53:39.064646 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-02-04 03:53:39.064661 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-04 03:53:39.586858 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-203073xb9djux8/tmpk5akf68d/ansible-playbooks-manageratz_dgup'... 2026-02-04 03:53:40.164915 | orchestrator | Already on 'main' 2026-02-04 03:53:40.165204 | orchestrator | Your branch is up to date with 'origin/main'. 
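The shell trace earlier gates the RabbitMQ vhost migration on three-way `semver` comparisons (e.g. `semver 10.0.0-rc.1 10.0.0-0` returning `1`, `semver v0.20251130.0 6.0.0` returning `-1`). A minimal Python sketch of what such a comparator implements — an assumption about the `semver` helper's semantics, not its actual code:

```python
def _parse(version):
    # Split "vMAJOR.MINOR.PATCH-PRERELEASE" into numeric core and
    # pre-release identifiers; a leading "v" is tolerated (assumption).
    core, _, pre = version.lstrip("v").partition("-")
    return [int(x) for x in core.split(".")], (pre.split(".") if pre else [])

def _cmp_ids(a, b):
    # Per SemVer, numeric identifiers compare numerically and always
    # rank below alphanumeric ones ("0" < "rc").
    a_num, b_num = a.isdigit(), b.isdigit()
    if a_num and b_num:
        return (int(a) > int(b)) - (int(a) < int(b))
    if a_num != b_num:
        return -1 if a_num else 1
    return (a > b) - (a < b)

def semver_cmp(x, y):
    """Return -1, 0, or 1, like the `semver` helper in the trace above."""
    (xn, xp), (yn, yp) = _parse(x), _parse(y)
    if xn != yn:
        return -1 if xn < yn else 1
    if bool(xp) != bool(yp):
        return -1 if xp else 1  # a pre-release sorts before the release
    for a, b in zip(xp, yp):
        c = _cmp_ids(a, b)
        if c:
            return c
    return (len(xp) > len(yp)) - (len(xp) < len(yp))
```

Under these rules `semver_cmp("10.0.0-rc.1", "10.0.0-0")` is `1` and `semver_cmp("v0.20251130.0", "6.0.0")` is `-1`, matching the exit values the upgrade script branches on when it sets `MANAGER_UPGRADE_CROSSES_10=true` and `OPENSTACK_UPGRADE_CROSSES_2025=false`.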
2026-02-04 03:53:40.458829 | orchestrator | Starting galaxy collection install process 2026-02-04 03:53:40.458970 | orchestrator | Process install dependency map 2026-02-04 03:53:40.459018 | orchestrator | Starting collection install process 2026-02-04 03:53:40.459092 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-02-04 03:53:40.459209 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-02-04 03:53:40.459224 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-02-04 03:53:41.121452 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-02-04 03:53:41.121561 | orchestrator | -vvvv to see details 2026-02-04 03:53:41.573046 | orchestrator | 2026-02-04 03:53:41.573149 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-02-04 03:53:41.573163 | orchestrator | 2026-02-04 03:53:41.573172 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-04 03:53:45.719556 | orchestrator | ok: [testbed-manager] 2026-02-04 03:53:45.719665 | orchestrator | 2026-02-04 03:53:45.719683 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-04 03:53:45.792987 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 03:53:45.793057 | orchestrator | 2026-02-04 03:53:45.793102 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-04 03:53:47.642168 | orchestrator | ok: [testbed-manager] 2026-02-04 03:53:47.642272 | orchestrator | 2026-02-04 03:53:47.642289 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 
2026-02-04 03:53:47.705584 | orchestrator | ok: [testbed-manager] 2026-02-04 03:53:47.705673 | orchestrator | 2026-02-04 03:53:47.705687 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-04 03:53:47.785639 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-04 03:53:47.785760 | orchestrator | 2026-02-04 03:53:47.785787 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-04 03:53:52.188007 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-02-04 03:53:52.188143 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-02-04 03:53:52.188159 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-04 03:53:52.188185 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-02-04 03:53:52.188197 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-04 03:53:52.188208 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-04 03:53:52.188219 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-04 03:53:52.188230 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-02-04 03:53:52.188241 | orchestrator | 2026-02-04 03:53:52.188253 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-04 03:53:53.261429 | orchestrator | ok: [testbed-manager] 2026-02-04 03:53:53.261538 | orchestrator | 2026-02-04 03:53:53.261554 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-04 03:53:54.234321 | orchestrator | ok: [testbed-manager] 2026-02-04 03:53:54.234436 | orchestrator | 2026-02-04 03:53:54.234454 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-04 03:53:54.329856 | 
orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-04 03:53:54.329950 | orchestrator | 2026-02-04 03:53:54.329965 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-04 03:53:56.250690 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-02-04 03:53:56.250799 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-02-04 03:53:56.250815 | orchestrator | 2026-02-04 03:53:56.250827 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-04 03:53:57.243518 | orchestrator | ok: [testbed-manager] 2026-02-04 03:53:57.243632 | orchestrator | 2026-02-04 03:53:57.243647 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-04 03:53:57.313572 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:53:57.313657 | orchestrator | 2026-02-04 03:53:57.313669 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-04 03:53:57.406292 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-04 03:53:57.406389 | orchestrator | 2026-02-04 03:53:57.406403 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-04 03:53:58.322117 | orchestrator | ok: [testbed-manager] 2026-02-04 03:53:58.322197 | orchestrator | 2026-02-04 03:53:58.322207 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-04 03:53:58.384366 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-04 03:53:58.384451 | orchestrator | 2026-02-04 03:53:58.384463 | orchestrator | TASK 
[osism.services.manager : Copy private ssh keys] ************************** 2026-02-04 03:54:00.305837 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-04 03:54:00.305948 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-04 03:54:00.305965 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:00.305978 | orchestrator | 2026-02-04 03:54:00.305996 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-04 03:54:01.240454 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:01.240561 | orchestrator | 2026-02-04 03:54:01.240594 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-04 03:54:01.297611 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:54:01.297688 | orchestrator | 2026-02-04 03:54:01.297698 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-04 03:54:01.391407 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-04 03:54:01.391529 | orchestrator | 2026-02-04 03:54:01.391554 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-04 03:54:03.097314 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:03.097416 | orchestrator | 2026-02-04 03:54:03.097434 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-04 03:54:03.664301 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:03.664413 | orchestrator | 2026-02-04 03:54:03.664436 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-04 03:54:05.519897 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-02-04 03:54:05.520049 | orchestrator | ok: [testbed-manager] => (item=openstack) 2026-02-04 03:54:05.520066 | orchestrator | 2026-02-04 
03:54:05.520079 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-04 03:54:06.710607 | orchestrator | changed: [testbed-manager] 2026-02-04 03:54:06.710709 | orchestrator | 2026-02-04 03:54:06.710726 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-04 03:54:07.294769 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:07.294869 | orchestrator | 2026-02-04 03:54:07.294895 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-04 03:54:07.845070 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:07.845147 | orchestrator | 2026-02-04 03:54:07.845173 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-04 03:54:07.908918 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:54:07.909029 | orchestrator | 2026-02-04 03:54:07.909043 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-04 03:54:08.001342 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-04 03:54:08.001436 | orchestrator | 2026-02-04 03:54:08.001450 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-04 03:54:08.064654 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:08.064735 | orchestrator | 2026-02-04 03:54:08.064746 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-04 03:54:11.024056 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-02-04 03:54:11.024172 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-02-04 03:54:11.024189 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 2026-02-04 03:54:11.024201 | orchestrator | 2026-02-04 03:54:11.024213 | 
orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-04 03:54:12.029993 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:12.030147 | orchestrator | 2026-02-04 03:54:12.030175 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-04 03:54:13.035252 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:13.035360 | orchestrator | 2026-02-04 03:54:13.035376 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-04 03:54:13.985575 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:13.985678 | orchestrator | 2026-02-04 03:54:13.985694 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-04 03:54:14.069434 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-04 03:54:14.069528 | orchestrator | 2026-02-04 03:54:14.069543 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-04 03:54:14.121979 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:14.122167 | orchestrator | 2026-02-04 03:54:14.122194 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-04 03:54:15.107776 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-02-04 03:54:15.107879 | orchestrator | 2026-02-04 03:54:15.107896 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-04 03:54:15.210603 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-04 03:54:15.210694 | orchestrator | 2026-02-04 03:54:15.210707 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 
2026-02-04 03:54:16.207506 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:16.207626 | orchestrator | 2026-02-04 03:54:16.207642 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-02-04 03:54:17.289677 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:17.289782 | orchestrator | 2026-02-04 03:54:17.289799 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-04 03:54:17.368315 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:54:17.368413 | orchestrator | 2026-02-04 03:54:17.368428 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-04 03:54:17.439454 | orchestrator | ok: [testbed-manager] 2026-02-04 03:54:17.439552 | orchestrator | 2026-02-04 03:54:17.439571 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-04 03:54:18.767625 | orchestrator | changed: [testbed-manager] 2026-02-04 03:54:18.767726 | orchestrator | 2026-02-04 03:54:18.767742 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-04 03:55:25.515732 | orchestrator | changed: [testbed-manager] 2026-02-04 03:55:25.515872 | orchestrator | 2026-02-04 03:55:25.515891 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-04 03:55:26.752941 | orchestrator | ok: [testbed-manager] 2026-02-04 03:55:26.753046 | orchestrator | 2026-02-04 03:55:26.753063 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-04 03:55:26.822190 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:55:26.822277 | orchestrator | 2026-02-04 03:55:26.822289 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-04 03:55:27.683755 | orchestrator | ok: [testbed-manager] 2026-02-04 
03:55:27.683847 | orchestrator | 2026-02-04 03:55:27.683860 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-02-04 03:55:27.762730 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:55:27.762829 | orchestrator | 2026-02-04 03:55:27.762845 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-04 03:55:27.762858 | orchestrator | 2026-02-04 03:55:27.762869 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-04 03:55:42.835919 | orchestrator | changed: [testbed-manager] 2026-02-04 03:55:42.836075 | orchestrator | 2026-02-04 03:55:42.836093 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-04 03:56:42.903330 | orchestrator | Pausing for 60 seconds 2026-02-04 03:56:42.903480 | orchestrator | changed: [testbed-manager] 2026-02-04 03:56:42.903499 | orchestrator | 2026-02-04 03:56:42.903512 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-02-04 03:56:42.950339 | orchestrator | ok: [testbed-manager] 2026-02-04 03:56:42.950465 | orchestrator | 2026-02-04 03:56:42.950480 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-04 03:56:46.561752 | orchestrator | changed: [testbed-manager] 2026-02-04 03:56:46.561856 | orchestrator | 2026-02-04 03:56:46.561872 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-04 03:57:49.599077 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-04 03:57:49.599255 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-02-04 03:57:49.599276 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-02-04 03:57:49.599289 | orchestrator | changed: [testbed-manager] 2026-02-04 03:57:49.599302 | orchestrator | 2026-02-04 03:57:49.599314 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-04 03:58:01.221341 | orchestrator | changed: [testbed-manager] 2026-02-04 03:58:01.221443 | orchestrator | 2026-02-04 03:58:01.221457 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-04 03:58:01.316634 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-04 03:58:01.316745 | orchestrator | 2026-02-04 03:58:01.316757 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-04 03:58:01.316766 | orchestrator | 2026-02-04 03:58:01.316775 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-04 03:58:01.381122 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:58:01.381284 | orchestrator | 2026-02-04 03:58:01.381301 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-04 03:58:01.450112 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-04 03:58:01.450256 | orchestrator | 2026-02-04 03:58:01.450309 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-04 03:58:02.549018 | orchestrator | changed: [testbed-manager] 2026-02-04 03:58:02.549137 | orchestrator | 2026-02-04 03:58:02.549176 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-04 03:58:06.283255 
| orchestrator | ok: [testbed-manager] 2026-02-04 03:58:06.283383 | orchestrator | 2026-02-04 03:58:06.283408 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-02-04 03:58:06.368743 | orchestrator | ok: [testbed-manager] => { 2026-02-04 03:58:06.368850 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-04 03:58:06.368868 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-04 03:58:06.368880 | orchestrator | "Checking running containers against expected versions...", 2026-02-04 03:58:06.368893 | orchestrator | "", 2026-02-04 03:58:06.368904 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-04 03:58:06.368916 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-04 03:58:06.368928 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.368939 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-04 03:58:06.368949 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.368961 | orchestrator | "", 2026-02-04 03:58:06.368972 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-04 03:58:06.368983 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-04 03:58:06.368994 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.369005 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-04 03:58:06.369016 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.369027 | orchestrator | "", 2026-02-04 03:58:06.369038 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-04 03:58:06.369049 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-04 03:58:06.369060 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.369071 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-04 03:58:06.369081 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.369092 | orchestrator | "", 2026-02-04 03:58:06.369103 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-04 03:58:06.369115 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-04 03:58:06.369125 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.369136 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-04 03:58:06.369193 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.369205 | orchestrator | "", 2026-02-04 03:58:06.369217 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-04 03:58:06.369230 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-04 03:58:06.369241 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.369253 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-04 03:58:06.369265 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.369279 | orchestrator | "", 2026-02-04 03:58:06.369294 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-04 03:58:06.369330 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-04 03:58:06.369344 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.369358 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-04 03:58:06.369372 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.369386 | orchestrator | "", 2026-02-04 03:58:06.369399 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-04 03:58:06.369414 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-04 03:58:06.369427 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.369441 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-04 
03:58:06.369452 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.369464 | orchestrator | "", 2026-02-04 03:58:06.369475 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-04 03:58:06.369487 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-04 03:58:06.369498 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.369518 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-04 03:58:06.369530 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.369541 | orchestrator | "", 2026-02-04 03:58:06.369553 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-04 03:58:06.369564 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-04 03:58:06.369576 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.369587 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-04 03:58:06.369599 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.369610 | orchestrator | "", 2026-02-04 03:58:06.369627 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-04 03:58:06.369639 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-04 03:58:06.369651 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.369662 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-04 03:58:06.369674 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.369686 | orchestrator | "", 2026-02-04 03:58:06.369697 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-04 03:58:06.369708 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-04 03:58:06.369720 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.369731 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-04 03:58:06.369743 | orchestrator | " Status: ✅ MATCH", 2026-02-04 
03:58:06.369755 | orchestrator | "", 2026-02-04 03:58:06.369766 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-04 03:58:06.369778 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-04 03:58:06.369789 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.369801 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-04 03:58:06.369812 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.369824 | orchestrator | "", 2026-02-04 03:58:06.369835 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-04 03:58:06.369847 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-04 03:58:06.369858 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.369870 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-04 03:58:06.369881 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.369893 | orchestrator | "", 2026-02-04 03:58:06.369905 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-04 03:58:06.369916 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-04 03:58:06.369928 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.369940 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-04 03:58:06.369970 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.369982 | orchestrator | "", 2026-02-04 03:58:06.369994 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-04 03:58:06.370005 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-04 03:58:06.370082 | orchestrator | " Enabled: true", 2026-02-04 03:58:06.370097 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-04 03:58:06.370107 | orchestrator | " Status: ✅ MATCH", 2026-02-04 03:58:06.370119 | orchestrator | "", 2026-02-04 03:58:06.370129 | orchestrator | "=== Summary 
===", 2026-02-04 03:58:06.370170 | orchestrator | "Errors (version mismatches): 0", 2026-02-04 03:58:06.370210 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-04 03:58:06.370229 | orchestrator | "", 2026-02-04 03:58:06.370245 | orchestrator | "✅ All running containers match expected versions!" 2026-02-04 03:58:06.370263 | orchestrator | ] 2026-02-04 03:58:06.370279 | orchestrator | } 2026-02-04 03:58:06.370297 | orchestrator | 2026-02-04 03:58:06.370318 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-04 03:58:06.427436 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:58:06.427530 | orchestrator | 2026-02-04 03:58:06.427545 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:58:06.427558 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-02-04 03:58:06.427570 | orchestrator | 2026-02-04 03:58:18.988586 | orchestrator | 2026-02-04 03:58:18 | INFO  | Task 059f3a3d-1f92-4166-bb95-e0729ab50ea1 (sync inventory) is running in background. Output coming soon. 
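The version check output above is produced by a script deployed by the `osism.services.manager` role; the deployed script reads service names and expected tags from the manager configuration. A minimal sketch of the per-service comparison logic (function name and tags here are illustrative, not the actual script) could look like:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the "MATCH / MISMATCH / not running" logic seen
# in the version check output above. Service names and expected image
# tags are examples; the real script derives them from configuration.
check_service() {
    local name="$1" expected="$2"
    local running
    # .Config.Image is the image reference the container was started from.
    running="$(docker inspect -f '{{.Config.Image}}' "$name" 2>/dev/null)" || {
        echo "$name: WARNING, expected container not running"
        return 0
    }
    if [[ "$running" == "$expected" ]]; then
        echo "$name: MATCH ($running)"
    else
        echo "$name: MISMATCH (expected $expected, running $running)"
        return 1
    fi
}

# Example invocation; prints a WARNING line if the container is absent.
check_service kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0
```

Mismatches are counted as errors and missing containers as warnings, matching the "Errors: 0 / Warnings: 0" summary in the log.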
2026-02-04 03:58:47.752405 | orchestrator | 2026-02-04 03:58:20 | INFO  | Starting group_vars file reorganization 2026-02-04 03:58:47.752523 | orchestrator | 2026-02-04 03:58:20 | INFO  | Moved 0 file(s) to their respective directories 2026-02-04 03:58:47.752540 | orchestrator | 2026-02-04 03:58:20 | INFO  | Group_vars file reorganization completed 2026-02-04 03:58:47.752573 | orchestrator | 2026-02-04 03:58:23 | INFO  | Starting variable preparation from inventory 2026-02-04 03:58:47.752585 | orchestrator | 2026-02-04 03:58:26 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-04 03:58:47.752597 | orchestrator | 2026-02-04 03:58:26 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-04 03:58:47.752608 | orchestrator | 2026-02-04 03:58:26 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-04 03:58:47.752619 | orchestrator | 2026-02-04 03:58:26 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-04 03:58:47.752630 | orchestrator | 2026-02-04 03:58:26 | INFO  | Variable preparation completed 2026-02-04 03:58:47.752641 | orchestrator | 2026-02-04 03:58:28 | INFO  | Starting inventory overwrite handling 2026-02-04 03:58:47.752652 | orchestrator | 2026-02-04 03:58:28 | INFO  | Handling group overwrites in 99-overwrite 2026-02-04 03:58:47.752663 | orchestrator | 2026-02-04 03:58:28 | INFO  | Removing group frr:children from 60-generic 2026-02-04 03:58:47.752674 | orchestrator | 2026-02-04 03:58:28 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-04 03:58:47.752685 | orchestrator | 2026-02-04 03:58:28 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-04 03:58:47.752696 | orchestrator | 2026-02-04 03:58:28 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-04 03:58:47.752707 | orchestrator | 2026-02-04 03:58:28 | INFO  | Handling group overwrites in 20-roles 2026-02-04 03:58:47.752718 | orchestrator | 2026-02-04 03:58:28 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-04 03:58:47.752729 | orchestrator | 2026-02-04 03:58:28 | INFO  | Removed 5 group(s) in total 2026-02-04 03:58:47.752741 | orchestrator | 2026-02-04 03:58:28 | INFO  | Inventory overwrite handling completed 2026-02-04 03:58:47.752751 | orchestrator | 2026-02-04 03:58:29 | INFO  | Starting merge of inventory files 2026-02-04 03:58:47.752762 | orchestrator | 2026-02-04 03:58:29 | INFO  | Inventory files merged successfully 2026-02-04 03:58:47.752798 | orchestrator | 2026-02-04 03:58:34 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-04 03:58:47.752818 | orchestrator | 2026-02-04 03:58:46 | INFO  | Successfully wrote ClusterShell configuration 2026-02-04 03:58:48.119914 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-04 03:58:48.120023 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-04 03:58:48.120135 | orchestrator | + local max_attempts=60 2026-02-04 03:58:48.120150 | orchestrator | + local name=kolla-ansible 2026-02-04 03:58:48.120161 | orchestrator | + local attempt_num=1 2026-02-04 03:58:48.120243 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-04 03:58:48.158806 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-04 03:58:48.158913 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-04 03:58:48.158929 | orchestrator | + local max_attempts=60 2026-02-04 03:58:48.158952 | orchestrator | + local name=osism-ansible 2026-02-04 03:58:48.158963 | orchestrator | + local attempt_num=1 2026-02-04 03:58:48.159637 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-04 03:58:48.201267 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-04 03:58:48.201380 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-04 03:58:48.398009 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-04 03:58:48.398223 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-04 03:58:48.398240 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-04 03:58:48.398253 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-04 03:58:48.398268 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-02-04 03:58:48.398279 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-02-04 03:58:48.398290 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-02-04 03:58:48.398301 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-02-04 03:58:48.398312 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 20 seconds ago 2026-02-04 03:58:48.398323 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-02-04 03:58:48.398333 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-02-04 03:58:48.398344 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-02-04 03:58:48.398355 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-04 03:58:48.398392 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-02-04 03:58:48.398404 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-02-04 03:58:48.398415 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-02-04 03:58:48.404486 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-02-04 03:58:48.404544 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-02-04 03:58:48.404556 | orchestrator | + osism apply facts 2026-02-04 03:59:00.887873 | orchestrator | 2026-02-04 03:59:00 | INFO  | Task 85972028-a450-4eba-ab91-79712e22750b (facts) was prepared for execution. 2026-02-04 03:59:00.887990 | orchestrator | 2026-02-04 03:59:00 | INFO  | It takes a moment until task 85972028-a450-4eba-ab91-79712e22750b (facts) has been started and output is visible here. 
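The `set -x` trace earlier in this section shows a `wait_for_container_healthy` helper polling `docker inspect` for the container health status. Reconstructed from the trace (only the already-healthy path is visible in the log, so the retry loop and sleep interval below are assumptions), it looks roughly like:

```shell
#!/usr/bin/env bash
# Reconstruction of the wait_for_container_healthy helper traced above.
# The polling loop and 5-second interval are assumptions; the trace only
# shows the case where the container is healthy on the first check.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}
```

In the log it is called as `wait_for_container_healthy 60 kolla-ansible` and `wait_for_container_healthy 60 osism-ansible` before running `docker compose ps`.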
2026-02-04 03:59:19.424298 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-04 03:59:19.424444 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-04 03:59:19.424481 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-04 03:59:19.424493 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-04 03:59:19.424516 | orchestrator | 2026-02-04 03:59:19.424528 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-04 03:59:19.424539 | orchestrator | 2026-02-04 03:59:19.424550 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-04 03:59:19.424561 | orchestrator | Wednesday 04 February 2026 03:59:07 +0000 (0:00:01.818) 0:00:01.818 **** 2026-02-04 03:59:19.424572 | orchestrator | ok: [testbed-manager] 2026-02-04 03:59:19.424584 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:59:19.424594 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:59:19.424605 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:59:19.424616 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:59:19.424627 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:59:19.424637 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:59:19.424648 | orchestrator | 2026-02-04 03:59:19.424659 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-04 03:59:19.424670 | orchestrator | Wednesday 04 February 2026 03:59:09 +0000 (0:00:02.194) 0:00:04.012 **** 2026-02-04 03:59:19.424681 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:59:19.424692 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:59:19.424724 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:59:19.424736 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:59:19.424752 | orchestrator | skipping: [testbed-node-3] 2026-02-04 
03:59:19.424763 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:59:19.424773 | orchestrator | skipping: [testbed-node-5] 2026-02-04 03:59:19.424784 | orchestrator | 2026-02-04 03:59:19.424796 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-04 03:59:19.424807 | orchestrator | 2026-02-04 03:59:19.424817 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-04 03:59:19.424828 | orchestrator | Wednesday 04 February 2026 03:59:11 +0000 (0:00:01.774) 0:00:05.787 **** 2026-02-04 03:59:19.424839 | orchestrator | ok: [testbed-node-0] 2026-02-04 03:59:19.424850 | orchestrator | ok: [testbed-manager] 2026-02-04 03:59:19.424861 | orchestrator | ok: [testbed-node-2] 2026-02-04 03:59:19.424871 | orchestrator | ok: [testbed-node-1] 2026-02-04 03:59:19.424905 | orchestrator | ok: [testbed-node-3] 2026-02-04 03:59:19.424917 | orchestrator | ok: [testbed-node-5] 2026-02-04 03:59:19.424927 | orchestrator | ok: [testbed-node-4] 2026-02-04 03:59:19.424938 | orchestrator | 2026-02-04 03:59:19.424949 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-04 03:59:19.424985 | orchestrator | 2026-02-04 03:59:19.424996 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-04 03:59:19.425007 | orchestrator | Wednesday 04 February 2026 03:59:17 +0000 (0:00:06.153) 0:00:11.941 **** 2026-02-04 03:59:19.425018 | orchestrator | skipping: [testbed-manager] 2026-02-04 03:59:19.425028 | orchestrator | skipping: [testbed-node-0] 2026-02-04 03:59:19.425039 | orchestrator | skipping: [testbed-node-1] 2026-02-04 03:59:19.425049 | orchestrator | skipping: [testbed-node-2] 2026-02-04 03:59:19.425060 | orchestrator | skipping: [testbed-node-3] 2026-02-04 03:59:19.425073 | orchestrator | skipping: [testbed-node-4] 2026-02-04 03:59:19.425092 | orchestrator | skipping: [testbed-node-5] 
2026-02-04 03:59:19.425110 | orchestrator | 2026-02-04 03:59:19.425128 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 03:59:19.425147 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 03:59:19.425168 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 03:59:19.425180 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 03:59:19.425191 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 03:59:19.425201 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 03:59:19.425212 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 03:59:19.425223 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-04 03:59:19.425234 | orchestrator | 2026-02-04 03:59:19.425245 | orchestrator | 2026-02-04 03:59:19.425256 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 03:59:19.425267 | orchestrator | Wednesday 04 February 2026 03:59:18 +0000 (0:00:01.706) 0:00:13.647 **** 2026-02-04 03:59:19.425277 | orchestrator | =============================================================================== 2026-02-04 03:59:19.425288 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.15s 2026-02-04 03:59:19.425299 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.19s 2026-02-04 03:59:19.425309 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.77s 2026-02-04 03:59:19.425320 | orchestrator | Gather facts for all hosts 
---------------------------------------------- 1.71s 2026-02-04 03:59:19.734310 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-04 03:59:19.846805 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-04 03:59:19.847562 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-04 03:59:19.891434 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-02-04 03:59:19.891543 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-02-04 03:59:19.898897 | orchestrator | + set -e 2026-02-04 03:59:19.899471 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-02-04 03:59:19.899506 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-04 03:59:19.909288 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-02-04 03:59:19.917609 | orchestrator | 2026-02-04 03:59:19.917687 | orchestrator | # UPGRADE SERVICES 2026-02-04 03:59:19.917728 | orchestrator | 2026-02-04 03:59:19.917741 | orchestrator | + set -e 2026-02-04 03:59:19.917752 | orchestrator | + echo 2026-02-04 03:59:19.917763 | orchestrator | + echo '# UPGRADE SERVICES' 2026-02-04 03:59:19.917774 | orchestrator | + echo 2026-02-04 03:59:19.917785 | orchestrator | + source /opt/manager-vars.sh 2026-02-04 03:59:19.918280 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-04 03:59:19.919031 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-04 03:59:19.919052 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-04 03:59:19.919063 | orchestrator | ++ CEPH_VERSION=reef 2026-02-04 03:59:19.919074 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-04 03:59:19.919086 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-04 03:59:19.919097 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-04 03:59:19.919108 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-04 03:59:19.919119 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2
2026-02-04 03:59:19.919130 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-04 03:59:19.919141 | orchestrator | ++ export ARA=false
2026-02-04 03:59:19.919152 | orchestrator | ++ ARA=false
2026-02-04 03:59:19.919162 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-04 03:59:19.919173 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-04 03:59:19.919184 | orchestrator | ++ export TEMPEST=false
2026-02-04 03:59:19.919194 | orchestrator | ++ TEMPEST=false
2026-02-04 03:59:19.919205 | orchestrator | ++ export IS_ZUUL=true
2026-02-04 03:59:19.919216 | orchestrator | ++ IS_ZUUL=true
2026-02-04 03:59:19.919227 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-04 03:59:19.919238 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-04 03:59:19.919249 | orchestrator | ++ export EXTERNAL_API=false
2026-02-04 03:59:19.919259 | orchestrator | ++ EXTERNAL_API=false
2026-02-04 03:59:19.919270 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-04 03:59:19.919281 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-04 03:59:19.919291 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-04 03:59:19.919302 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-04 03:59:19.919313 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-04 03:59:19.919324 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-04 03:59:19.919335 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-04 03:59:19.919346 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-04 03:59:19.919376 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false
2026-02-04 03:59:19.919388 | orchestrator | + SKIP_CEPH_UPGRADE=false
2026-02-04 03:59:19.919399 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-04 03:59:19.929065 | orchestrator | + set -e
2026-02-04 03:59:19.929128 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-04 03:59:19.930478 | orchestrator | ++ export INTERACTIVE=false
2026-02-04 03:59:19.930528 | orchestrator | ++ INTERACTIVE=false
2026-02-04 03:59:19.930546 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-04 03:59:19.930564 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-04 03:59:19.930583 | orchestrator | + source /opt/manager-vars.sh
2026-02-04 03:59:19.930600 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-04 03:59:19.930618 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-04 03:59:19.930636 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-04 03:59:19.930654 | orchestrator | ++ CEPH_VERSION=reef
2026-02-04 03:59:19.930672 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-04 03:59:19.930689 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-04 03:59:19.930708 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-04 03:59:19.930726 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-04 03:59:19.930746 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-04 03:59:19.930764 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-04 03:59:19.930782 | orchestrator | ++ export ARA=false
2026-02-04 03:59:19.930798 | orchestrator | ++ ARA=false
2026-02-04 03:59:19.930809 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-04 03:59:19.930819 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-04 03:59:19.930830 | orchestrator | ++ export TEMPEST=false
2026-02-04 03:59:19.930841 | orchestrator | ++ TEMPEST=false
2026-02-04 03:59:19.930851 | orchestrator |
2026-02-04 03:59:19.930862 | orchestrator | # PULL IMAGES
2026-02-04 03:59:19.930874 | orchestrator |
2026-02-04 03:59:19.930885 | orchestrator | ++ export IS_ZUUL=true
2026-02-04 03:59:19.930895 | orchestrator | ++ IS_ZUUL=true
2026-02-04 03:59:19.930906 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-04 03:59:19.930917 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.115
2026-02-04 03:59:19.930928 | orchestrator | ++ export EXTERNAL_API=false
2026-02-04 03:59:19.930938 | orchestrator | ++ EXTERNAL_API=false
2026-02-04 03:59:19.930949 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-04 03:59:19.931016 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-04 03:59:19.931028 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-04 03:59:19.931038 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-04 03:59:19.931075 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-04 03:59:19.931088 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-04 03:59:19.931101 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-04 03:59:19.931113 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-04 03:59:19.931126 | orchestrator | + echo
2026-02-04 03:59:19.931140 | orchestrator | + echo '# PULL IMAGES'
2026-02-04 03:59:19.931152 | orchestrator | + echo
2026-02-04 03:59:19.931948 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-04 03:59:20.001805 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-04 03:59:20.001912 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-04 03:59:22.203086 | orchestrator | 2026-02-04 03:59:22 | INFO  | Trying to run play pull-images in environment custom
2026-02-04 03:59:32.311920 | orchestrator | 2026-02-04 03:59:32 | INFO  | Task cea9b871-c64d-4495-ac25-fc963fac7bd0 (pull-images) was prepared for execution.
2026-02-04 03:59:32.312088 | orchestrator | 2026-02-04 03:59:32 | INFO  | Task cea9b871-c64d-4495-ac25-fc963fac7bd0 is running in background. No more output. Check ARA for logs.
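The `++ semver 9.5.0 7.0.0` / `+ [[ 1 -ge 0 ]]` pair in the trace above is the version gate the upgrade scripts use: a step only runs when the installed manager version is at least some minimum. A minimal sketch of that pattern, assuming a comparison helper that prints 1/0/-1 like the trace suggests (the real `semver` binary used by the testbed scripts may behave differently, e.g. in how it ranks pre-release tags, which this sketch simply strips):

```shell
# semver_cmp A B -> prints 1 if A > B, 0 if equal, -1 if A < B.
# Hypothetical stand-in for the `semver` helper seen in the log;
# pre-release suffixes like "-rc.1" are ignored here.
semver_cmp() {
    local a=${1%%-*} b=${2%%-*}   # strip pre-release tags
    if [ "$a" = "$b" ]; then echo 0; return; fi
    # GNU `sort -V` orders versions; the smaller one sorts first
    if [ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" = "$a" ]; then
        echo -1
    else
        echo 1
    fi
}

result=$(semver_cmp 9.5.0 7.0.0)
if [ "$result" -ge 0 ]; then
    echo "manager version is >= 7.0.0, running pull-images"
fi
```

This mirrors the gate in the log: `semver 9.5.0 7.0.0` yields `1`, `[[ 1 -ge 0 ]]` succeeds, and `osism apply ... pull-images` runs.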
2026-02-04 03:59:32.684532 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh
2026-02-04 03:59:32.695001 | orchestrator | + set -e
2026-02-04 03:59:32.695079 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-04 03:59:32.695094 | orchestrator | ++ export INTERACTIVE=false
2026-02-04 03:59:32.695106 | orchestrator | ++ INTERACTIVE=false
2026-02-04 03:59:32.695117 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-04 03:59:32.695128 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-04 03:59:32.695140 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-04 03:59:32.697064 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-04 03:59:32.706420 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-04 03:59:32.706478 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-04 03:59:32.707054 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3
2026-02-04 03:59:32.756015 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-04 03:59:32.756107 | orchestrator | + osism apply frr
2026-02-04 03:59:45.013105 | orchestrator | 2026-02-04 03:59:45 | INFO  | Task b209d9a3-b9a0-44b1-a821-f6aed29bf27e (frr) was prepared for execution.
2026-02-04 03:59:45.013247 | orchestrator | 2026-02-04 03:59:45 | INFO  | It takes a moment until task b209d9a3-b9a0-44b1-a821-f6aed29bf27e (frr) has been started and output is visible here.
2026-02-04 04:00:18.278583 | orchestrator |
2026-02-04 04:00:18.278681 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-04 04:00:18.278693 | orchestrator |
2026-02-04 04:00:18.278702 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-04 04:00:18.278710 | orchestrator | Wednesday 04 February 2026 03:59:53 +0000 (0:00:04.297) 0:00:04.297 ****
2026-02-04 04:00:18.278718 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-04 04:00:18.278727 | orchestrator |
2026-02-04 04:00:18.278735 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-04 04:00:18.278742 | orchestrator | Wednesday 04 February 2026 03:59:55 +0000 (0:00:01.826) 0:00:06.124 ****
2026-02-04 04:00:18.278749 | orchestrator | ok: [testbed-manager]
2026-02-04 04:00:18.278757 | orchestrator |
2026-02-04 04:00:18.278765 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-04 04:00:18.278773 | orchestrator | Wednesday 04 February 2026 03:59:58 +0000 (0:00:02.330) 0:00:08.454 ****
2026-02-04 04:00:18.278780 | orchestrator | ok: [testbed-manager]
2026-02-04 04:00:18.278787 | orchestrator |
2026-02-04 04:00:18.278795 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-04 04:00:18.278802 | orchestrator | Wednesday 04 February 2026 04:00:01 +0000 (0:00:03.064) 0:00:11.519 ****
2026-02-04 04:00:18.278809 | orchestrator | ok: [testbed-manager]
2026-02-04 04:00:18.278816 | orchestrator |
2026-02-04 04:00:18.278864 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-04 04:00:18.278872 | orchestrator | Wednesday 04 February 2026 04:00:03 +0000 (0:00:01.957) 0:00:13.477 ****
2026-02-04 04:00:18.278897 | orchestrator | ok: [testbed-manager]
2026-02-04 04:00:18.278905 | orchestrator |
2026-02-04 04:00:18.278912 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-04 04:00:18.278919 | orchestrator | Wednesday 04 February 2026 04:00:05 +0000 (0:00:01.957) 0:00:15.435 ****
2026-02-04 04:00:18.278926 | orchestrator | ok: [testbed-manager]
2026-02-04 04:00:18.278934 | orchestrator |
2026-02-04 04:00:18.278941 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-04 04:00:18.278949 | orchestrator | Wednesday 04 February 2026 04:00:07 +0000 (0:00:02.390) 0:00:17.826 ****
2026-02-04 04:00:18.278957 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:00:18.278964 | orchestrator |
2026-02-04 04:00:18.278972 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-04 04:00:18.278979 | orchestrator | Wednesday 04 February 2026 04:00:08 +0000 (0:00:01.111) 0:00:18.937 ****
2026-02-04 04:00:18.278986 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:00:18.278994 | orchestrator |
2026-02-04 04:00:18.279001 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-04 04:00:18.279008 | orchestrator | Wednesday 04 February 2026 04:00:09 +0000 (0:00:01.111) 0:00:20.048 ****
2026-02-04 04:00:18.279016 | orchestrator | ok: [testbed-manager]
2026-02-04 04:00:18.279023 | orchestrator |
2026-02-04 04:00:18.279030 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-04 04:00:18.279037 | orchestrator | Wednesday 04 February 2026 04:00:11 +0000 (0:00:02.022) 0:00:22.071 ****
2026-02-04 04:00:18.279044 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-04 04:00:18.279067 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-04 04:00:18.279076 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-04 04:00:18.279084 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-04 04:00:18.279091 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-04 04:00:18.279098 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-04 04:00:18.279106 | orchestrator |
2026-02-04 04:00:18.279113 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-04 04:00:18.279120 | orchestrator | Wednesday 04 February 2026 04:00:15 +0000 (0:00:03.574) 0:00:25.645 ****
2026-02-04 04:00:18.279127 | orchestrator | ok: [testbed-manager]
2026-02-04 04:00:18.279136 | orchestrator |
2026-02-04 04:00:18.279145 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 04:00:18.279154 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-04 04:00:18.279164 | orchestrator |
2026-02-04 04:00:18.279172 | orchestrator |
2026-02-04 04:00:18.279181 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 04:00:18.279191 | orchestrator | Wednesday 04 February 2026 04:00:17 +0000 (0:00:02.603) 0:00:28.249 ****
2026-02-04 04:00:18.279199 | orchestrator | ===============================================================================
2026-02-04 04:00:18.279208 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.57s
2026-02-04 04:00:18.279216 | orchestrator | osism.services.frr : Install frr package -------------------------------- 3.06s
2026-02-04 04:00:18.279225 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 2.60s
2026-02-04 04:00:18.279234 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.39s
2026-02-04 04:00:18.279242 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.33s
2026-02-04 04:00:18.279251 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 2.02s
2026-02-04 04:00:18.279261 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.96s
2026-02-04 04:00:18.279277 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.96s
2026-02-04 04:00:18.279306 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.83s
2026-02-04 04:00:18.279319 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.11s
2026-02-04 04:00:18.279330 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.11s
2026-02-04 04:00:18.591375 | orchestrator | + osism apply kubernetes
2026-02-04 04:00:20.678419 | orchestrator | 2026-02-04 04:00:20 | INFO  | Task aa235422-ce00-4798-9325-a9cdf02b7137 (kubernetes) was prepared for execution.
2026-02-04 04:00:20.678543 | orchestrator | 2026-02-04 04:00:20 | INFO  | It takes a moment until task aa235422-ce00-4798-9325-a9cdf02b7137 (kubernetes) has been started and output is visible here.
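The "Set sysctl parameters" task in the frr play above applied six routing-related kernel settings on testbed-manager. As a standalone sketch, the same values (taken verbatim from the task output) can be expressed as a sysctl drop-in file; the filename here is an assumption for illustration, and on a real host the file would live under /etc/sysctl.d/:

```shell
# Write the sysctl values the frr role applied above into a drop-in file.
# File name "90-frr-routing-demo.conf" is a hypothetical choice for this sketch.
cat > 90-frr-routing-demo.conf <<'EOF'
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
EOF
# On a real host (as root) the file would be loaded with:
#   sysctl -p 90-frr-routing-demo.conf
```

Forwarding on, ICMP redirects off, multipath hashing on L3/L4, and loose reverse-path filtering (rp_filter=2) is a typical profile for a BGP-routed host like this one.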
2026-02-04 04:01:04.452244 | orchestrator |
2026-02-04 04:01:04.452368 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-04 04:01:04.452390 | orchestrator |
2026-02-04 04:01:04.452409 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-04 04:01:04.452426 | orchestrator | Wednesday 04 February 2026 04:00:27 +0000 (0:00:01.982) 0:00:01.982 ****
2026-02-04 04:01:04.452441 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:01:04.452459 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:01:04.452475 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:01:04.452490 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:01:04.452507 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:01:04.452523 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:01:04.452539 | orchestrator |
2026-02-04 04:01:04.452555 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-04 04:01:04.452570 | orchestrator | Wednesday 04 February 2026 04:00:31 +0000 (0:00:04.151) 0:00:06.133 ****
2026-02-04 04:01:04.452587 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:01:04.452604 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:01:04.452621 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:01:04.452639 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:01:04.452656 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:01:04.452674 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:01:04.452691 | orchestrator |
2026-02-04 04:01:04.452709 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-04 04:01:04.452770 | orchestrator | Wednesday 04 February 2026 04:00:33 +0000 (0:00:02.007) 0:00:08.141 ****
2026-02-04 04:01:04.452793 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:01:04.452814 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:01:04.452834 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:01:04.452852 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:01:04.452872 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:01:04.452890 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:01:04.452908 | orchestrator |
2026-02-04 04:01:04.452928 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-04 04:01:04.452949 | orchestrator | Wednesday 04 February 2026 04:00:35 +0000 (0:00:02.027) 0:00:10.169 ****
2026-02-04 04:01:04.452969 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:01:04.452989 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:01:04.453010 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:01:04.453030 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:01:04.453051 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:01:04.453072 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:01:04.453092 | orchestrator |
2026-02-04 04:01:04.453113 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-04 04:01:04.453130 | orchestrator | Wednesday 04 February 2026 04:00:38 +0000 (0:00:03.244) 0:00:13.414 ****
2026-02-04 04:01:04.453146 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:01:04.453162 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:01:04.453177 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:01:04.453193 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:01:04.453238 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:01:04.453254 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:01:04.453270 | orchestrator |
2026-02-04 04:01:04.453285 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-04 04:01:04.453300 | orchestrator | Wednesday 04 February 2026 04:00:40 +0000 (0:00:02.247) 0:00:15.662 ****
2026-02-04 04:01:04.453315 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:01:04.453329 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:01:04.453344 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:01:04.453359 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:01:04.453374 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:01:04.453389 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:01:04.453403 | orchestrator |
2026-02-04 04:01:04.453419 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-04 04:01:04.453435 | orchestrator | Wednesday 04 February 2026 04:00:42 +0000 (0:00:02.009) 0:00:17.671 ****
2026-02-04 04:01:04.453449 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:01:04.453464 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:01:04.453479 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:01:04.453494 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:01:04.453509 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:01:04.453523 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:01:04.453538 | orchestrator |
2026-02-04 04:01:04.453554 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-04 04:01:04.453569 | orchestrator | Wednesday 04 February 2026 04:00:44 +0000 (0:00:01.986) 0:00:19.657 ****
2026-02-04 04:01:04.453584 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:01:04.453598 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:01:04.453613 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:01:04.453628 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:01:04.453657 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:01:04.453674 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:01:04.453690 | orchestrator |
2026-02-04 04:01:04.453706 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-04 04:01:04.453722 | orchestrator | Wednesday 04 February 2026 04:00:46 +0000 (0:00:01.909) 0:00:21.567 ****
2026-02-04 04:01:04.453766 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-04 04:01:04.453786 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-04 04:01:04.453803 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:01:04.453821 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-04 04:01:04.453838 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-04 04:01:04.453853 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:01:04.453867 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-04 04:01:04.453881 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-04 04:01:04.453895 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:01:04.453911 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-04 04:01:04.453927 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-04 04:01:04.453943 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:01:04.453985 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-04 04:01:04.454002 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-04 04:01:04.454050 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:01:04.454061 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-04 04:01:04.454073 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-04 04:01:04.454083 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:01:04.454093 | orchestrator |
2026-02-04 04:01:04.454115 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-04 04:01:04.454126 | orchestrator | Wednesday 04 February 2026 04:00:48 +0000 (0:00:02.041) 0:00:23.608 ****
2026-02-04 04:01:04.454135 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:01:04.454145 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:01:04.454155 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:01:04.454164 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:01:04.454174 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:01:04.454183 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:01:04.454193 | orchestrator |
2026-02-04 04:01:04.454202 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-04 04:01:04.454213 | orchestrator | Wednesday 04 February 2026 04:00:50 +0000 (0:00:02.090) 0:00:25.698 ****
2026-02-04 04:01:04.454223 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:01:04.454233 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:01:04.454243 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:01:04.454252 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:01:04.454262 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:01:04.454272 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:01:04.454281 | orchestrator |
2026-02-04 04:01:04.454291 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-04 04:01:04.454300 | orchestrator | Wednesday 04 February 2026 04:00:52 +0000 (0:00:02.058) 0:00:27.757 ****
2026-02-04 04:01:04.454310 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:01:04.454319 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:01:04.454329 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:01:04.454338 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:01:04.454348 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:01:04.454357 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:01:04.454367 | orchestrator |
2026-02-04 04:01:04.454376 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-04 04:01:04.454386 | orchestrator | Wednesday 04 February 2026 04:00:55 +0000 (0:00:03.077) 0:00:30.834 ****
2026-02-04 04:01:04.454396 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:01:04.454405 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:01:04.454415 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:01:04.454425 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:01:04.454434 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:01:04.454444 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:01:04.454453 | orchestrator |
2026-02-04 04:01:04.454463 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-04 04:01:04.454472 | orchestrator | Wednesday 04 February 2026 04:00:57 +0000 (0:00:01.970) 0:00:32.805 ****
2026-02-04 04:01:04.454482 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:01:04.454492 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:01:04.454501 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:01:04.454511 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:01:04.454520 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:01:04.454530 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:01:04.454540 | orchestrator |
2026-02-04 04:01:04.454549 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-04 04:01:04.454561 | orchestrator | Wednesday 04 February 2026 04:01:00 +0000 (0:00:02.151) 0:00:34.956 ****
2026-02-04 04:01:04.454571 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:01:04.454584 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:01:04.454594 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:01:04.454604 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:01:04.454613 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:01:04.454623 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:01:04.454632 | orchestrator |
2026-02-04 04:01:04.454642 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-04 04:01:04.454651 | orchestrator | Wednesday 04 February 2026 04:01:01 +0000 (0:00:01.969) 0:00:36.925 ****
2026-02-04 04:01:04.454667 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-04 04:01:04.454677 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-04 04:01:04.454687 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:01:04.454697 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-04 04:01:04.454706 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-04 04:01:04.454716 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:01:04.454726 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-04 04:01:04.454840 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-04 04:01:04.454852 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:01:04.454862 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-04 04:01:04.454872 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-04 04:01:04.454881 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:01:04.454891 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-04 04:01:04.454901 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-04 04:01:04.454910 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:01:04.454920 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-04 04:01:04.454929 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-04 04:01:04.454939 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:01:04.454948 | orchestrator |
2026-02-04 04:01:04.454958 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-04 04:01:04.454968 | orchestrator | Wednesday 04 February 2026 04:01:04 +0000 (0:00:02.006) 0:00:38.932 ****
2026-02-04 04:01:04.454978 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:01:04.454987 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:01:04.455007 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:02:41.655799 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:02:41.655917 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:02:41.655933 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:02:41.655945 | orchestrator |
2026-02-04 04:02:41.655958 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-04 04:02:41.655970 | orchestrator | Wednesday 04 February 2026 04:01:05 +0000 (0:00:01.934) 0:00:40.867 ****
2026-02-04 04:02:41.655982 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:02:41.655993 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:02:41.656004 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:02:41.656015 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:02:41.656025 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:02:41.656036 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:02:41.656047 | orchestrator |
2026-02-04 04:02:41.656058 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-04 04:02:41.656069 | orchestrator |
2026-02-04 04:02:41.656081 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-04 04:02:41.656093 | orchestrator | Wednesday 04 February 2026 04:01:08 +0000 (0:00:02.668) 0:00:43.535 ****
2026-02-04 04:02:41.656104 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:02:41.656116 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:02:41.656144 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:02:41.656156 | orchestrator |
2026-02-04 04:02:41.656172 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-04 04:02:41.656183 | orchestrator | Wednesday 04 February 2026 04:01:10 +0000 (0:00:01.723) 0:00:45.258 ****
2026-02-04 04:02:41.656194 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:02:41.656205 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:02:41.656216 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:02:41.656226 | orchestrator |
2026-02-04 04:02:41.656237 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-04 04:02:41.656248 | orchestrator | Wednesday 04 February 2026 04:01:12 +0000 (0:00:02.125) 0:00:47.384 ****
2026-02-04 04:02:41.656283 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:02:41.656320 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:02:41.656338 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:02:41.656355 | orchestrator |
2026-02-04 04:02:41.656375 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-04 04:02:41.656395 | orchestrator | Wednesday 04 February 2026 04:01:14 +0000 (0:00:02.113) 0:00:49.498 ****
2026-02-04 04:02:41.656415 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:02:41.656434 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:02:41.656449 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:02:41.656462 | orchestrator |
2026-02-04 04:02:41.656476 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-04 04:02:41.656489 | orchestrator | Wednesday 04 February 2026 04:01:16 +0000 (0:00:01.980) 0:00:51.479 ****
2026-02-04 04:02:41.656501 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:02:41.656514 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:02:41.656526 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:02:41.656539 | orchestrator |
2026-02-04 04:02:41.656552 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-04 04:02:41.656588 | orchestrator | Wednesday 04 February 2026 04:01:18 +0000 (0:00:01.517) 0:00:52.997 ****
2026-02-04 04:02:41.656601 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:02:41.656614 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:02:41.656627 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:02:41.656640 | orchestrator |
2026-02-04 04:02:41.656652 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-04 04:02:41.656665 | orchestrator | Wednesday 04 February 2026 04:01:19 +0000 (0:00:01.667) 0:00:54.665 ****
2026-02-04 04:02:41.656678 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:02:41.656691 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:02:41.656701 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:02:41.656712 | orchestrator |
2026-02-04 04:02:41.656723 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-04 04:02:41.656734 | orchestrator | Wednesday 04 February 2026 04:01:21 +0000 (0:00:02.207) 0:00:56.872 ****
2026-02-04 04:02:41.656745 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 04:02:41.656756 | orchestrator |
2026-02-04 04:02:41.656767 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-04 04:02:41.656778 | orchestrator | Wednesday 04 February 2026 04:01:24 +0000 (0:00:02.061) 0:00:58.933 ****
2026-02-04 04:02:41.656789 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:02:41.656799 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:02:41.656810 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:02:41.656821 | orchestrator |
2026-02-04 04:02:41.656832 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-04 04:02:41.656843 | orchestrator | Wednesday 04 February 2026 04:01:26 +0000 (0:00:02.443) 0:01:01.376 ****
2026-02-04 04:02:41.656854 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:02:41.656864 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:02:41.656875 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:02:41.656886 | orchestrator |
2026-02-04 04:02:41.656897 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-04 04:02:41.656908 | orchestrator | Wednesday 04 February 2026 04:01:28 +0000 (0:00:01.690) 0:01:03.067 ****
2026-02-04 04:02:41.656919 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:02:41.656930 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:02:41.656940 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:02:41.656951 | orchestrator |
2026-02-04 04:02:41.656962 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-04 04:02:41.656973 | orchestrator | Wednesday 04 February 2026 04:01:29 +0000 (0:00:01.841) 0:01:04.909 ****
2026-02-04 04:02:41.656983 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:02:41.656994 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:02:41.657005 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:02:41.657025 | orchestrator |
2026-02-04 04:02:41.657036 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-04 04:02:41.657047 | orchestrator | Wednesday 04 February 2026 04:01:32 +0000 (0:00:02.439) 0:01:07.348 ****
2026-02-04 04:02:41.657057 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:02:41.657068 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:02:41.657098 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:02:41.657109 | orchestrator |
2026-02-04 04:02:41.657120 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-04 04:02:41.657131 | orchestrator | Wednesday 04 February 2026 04:01:33 +0000 (0:00:01.454) 0:01:08.803 ****
2026-02-04 04:02:41.657142 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:02:41.657153 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:02:41.657164 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:02:41.657174 | orchestrator |
2026-02-04 04:02:41.657185 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-04 04:02:41.657196 | orchestrator | Wednesday 04 February 2026 04:01:35 +0000 (0:00:01.586) 0:01:10.389 ****
2026-02-04 04:02:41.657207 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:02:41.657218 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:02:41.657229 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:02:41.657240 | orchestrator |
2026-02-04 04:02:41.657250 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-04 04:02:41.657261 | orchestrator | Wednesday 04 February 2026 04:01:37 +0000 (0:00:02.198) 0:01:12.587 ****
2026-02-04 04:02:41.657272 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:02:41.657283 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:02:41.657294 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:02:41.657304 | orchestrator |
2026-02-04 04:02:41.657315 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-04 04:02:41.657326 | orchestrator | Wednesday 04 February 2026 04:01:39 +0000 (0:00:01.873) 0:01:14.460 ****
2026-02-04 04:02:41.657337 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:02:41.657348 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:02:41.657358 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:02:41.657369 | orchestrator |
2026-02-04 04:02:41.657380
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-04 04:02:41.657392 | orchestrator | Wednesday 04 February 2026 04:01:41 +0000 (0:00:01.491) 0:01:15.952 **** 2026-02-04 04:02:41.657403 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-04 04:02:41.657416 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-04 04:02:41.657427 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-04 04:02:41.657438 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-04 04:02:41.657449 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-04 04:02:41.657460 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-04 04:02:41.657471 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-04 04:02:41.657482 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-04 04:02:41.657492 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-02-04 04:02:41.657503 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-04 04:02:41.657522 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-04 04:02:41.657532 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-04 04:02:41.657543 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-04 04:02:41.657568 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:02:41.657580 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:02:41.657591 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:02:41.657602 | orchestrator | 2026-02-04 04:02:41.657613 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-04 04:02:41.657624 | orchestrator | Wednesday 04 February 2026 04:02:35 +0000 (0:00:54.931) 0:02:10.883 **** 2026-02-04 04:02:41.657635 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:02:41.657646 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:02:41.657657 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:02:41.657667 | orchestrator | 2026-02-04 04:02:41.657685 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-04 04:02:41.657696 | orchestrator | Wednesday 04 February 2026 04:02:37 +0000 (0:00:01.365) 0:02:12.249 **** 2026-02-04 04:02:41.657707 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:02:41.657718 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:02:41.657729 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:02:41.657740 | orchestrator | 2026-02-04 04:02:41.657751 | orchestrator | TASK 
[k3s_server : Copy K3s service file] ************************************** 2026-02-04 04:02:41.657762 | orchestrator | Wednesday 04 February 2026 04:02:39 +0000 (0:00:02.120) 0:02:14.369 **** 2026-02-04 04:02:41.657773 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:02:41.657784 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:02:41.657794 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:02:41.657805 | orchestrator | 2026-02-04 04:02:41.657824 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-04 04:04:16.559778 | orchestrator | Wednesday 04 February 2026 04:02:41 +0000 (0:00:02.193) 0:02:16.563 **** 2026-02-04 04:04:16.559897 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:04:16.559914 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:04:16.559926 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:04:16.559937 | orchestrator | 2026-02-04 04:04:16.559949 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-04 04:04:16.559960 | orchestrator | Wednesday 04 February 2026 04:03:36 +0000 (0:00:54.725) 0:03:11.289 **** 2026-02-04 04:04:16.559971 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:04:16.559983 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:04:16.559994 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:04:16.560005 | orchestrator | 2026-02-04 04:04:16.560016 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-04 04:04:16.560027 | orchestrator | Wednesday 04 February 2026 04:03:38 +0000 (0:00:01.682) 0:03:12.971 **** 2026-02-04 04:04:16.560038 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:04:16.560049 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:04:16.560060 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:04:16.560071 | orchestrator | 2026-02-04 04:04:16.560081 | orchestrator | TASK [k3s_server : Change file access node-token] 
****************************** 2026-02-04 04:04:16.560092 | orchestrator | Wednesday 04 February 2026 04:03:39 +0000 (0:00:01.638) 0:03:14.610 **** 2026-02-04 04:04:16.560103 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:04:16.560130 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:04:16.560142 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:04:16.560153 | orchestrator | 2026-02-04 04:04:16.560164 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-04 04:04:16.560174 | orchestrator | Wednesday 04 February 2026 04:03:41 +0000 (0:00:01.905) 0:03:16.516 **** 2026-02-04 04:04:16.560207 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:04:16.560218 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:04:16.560229 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:04:16.560240 | orchestrator | 2026-02-04 04:04:16.560251 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-04 04:04:16.560262 | orchestrator | Wednesday 04 February 2026 04:03:43 +0000 (0:00:01.703) 0:03:18.220 **** 2026-02-04 04:04:16.560273 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:04:16.560284 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:04:16.560294 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:04:16.560305 | orchestrator | 2026-02-04 04:04:16.560318 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-04 04:04:16.560331 | orchestrator | Wednesday 04 February 2026 04:03:44 +0000 (0:00:01.365) 0:03:19.586 **** 2026-02-04 04:04:16.560344 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:04:16.560357 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:04:16.560370 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:04:16.560383 | orchestrator | 2026-02-04 04:04:16.560396 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-04 
04:04:16.560409 | orchestrator | Wednesday 04 February 2026 04:03:46 +0000 (0:00:02.062) 0:03:21.648 **** 2026-02-04 04:04:16.560445 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:04:16.560458 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:04:16.560471 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:04:16.560484 | orchestrator | 2026-02-04 04:04:16.560497 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-04 04:04:16.560510 | orchestrator | Wednesday 04 February 2026 04:03:48 +0000 (0:00:02.235) 0:03:23.884 **** 2026-02-04 04:04:16.560523 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:04:16.560537 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:04:16.560550 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:04:16.560563 | orchestrator | 2026-02-04 04:04:16.560576 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-04 04:04:16.560593 | orchestrator | Wednesday 04 February 2026 04:03:50 +0000 (0:00:01.981) 0:03:25.866 **** 2026-02-04 04:04:16.560613 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:04:16.560627 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:04:16.560642 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:04:16.560654 | orchestrator | 2026-02-04 04:04:16.560667 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-04 04:04:16.560680 | orchestrator | Wednesday 04 February 2026 04:03:52 +0000 (0:00:02.046) 0:03:27.912 **** 2026-02-04 04:04:16.560693 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:04:16.560704 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:04:16.560715 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:04:16.560726 | orchestrator | 2026-02-04 04:04:16.560737 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-04 04:04:16.560748 | 
orchestrator | Wednesday 04 February 2026 04:03:54 +0000 (0:00:01.577) 0:03:29.490 **** 2026-02-04 04:04:16.560758 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:04:16.560769 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:04:16.560780 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:04:16.560790 | orchestrator | 2026-02-04 04:04:16.560801 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-04 04:04:16.560812 | orchestrator | Wednesday 04 February 2026 04:03:56 +0000 (0:00:01.667) 0:03:31.157 **** 2026-02-04 04:04:16.560822 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:04:16.560833 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:04:16.560844 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:04:16.560855 | orchestrator | 2026-02-04 04:04:16.560865 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-04 04:04:16.560876 | orchestrator | Wednesday 04 February 2026 04:03:58 +0000 (0:00:02.064) 0:03:33.221 **** 2026-02-04 04:04:16.560887 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:04:16.560906 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:04:16.560916 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:04:16.560927 | orchestrator | 2026-02-04 04:04:16.560939 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-04 04:04:16.560951 | orchestrator | Wednesday 04 February 2026 04:04:00 +0000 (0:00:01.797) 0:03:35.019 **** 2026-02-04 04:04:16.560962 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-04 04:04:16.560973 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-04 04:04:16.561002 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-04 04:04:16.561014 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-04 04:04:16.561024 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-04 04:04:16.561035 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-04 04:04:16.561047 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-04 04:04:16.561131 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-04 04:04:16.561144 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-04 04:04:16.561189 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-04 04:04:16.561202 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-04 04:04:16.561213 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-04 04:04:16.561224 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-04 04:04:16.561235 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-04 04:04:16.561246 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-04 04:04:16.561257 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-04 04:04:16.561268 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-04 04:04:16.561279 | orchestrator | changed: [testbed-node-2] => 
(item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-04 04:04:16.561290 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-04 04:04:16.561300 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-04 04:04:16.561311 | orchestrator | 2026-02-04 04:04:16.561322 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-04 04:04:16.561333 | orchestrator | 2026-02-04 04:04:16.561344 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-04 04:04:16.561355 | orchestrator | Wednesday 04 February 2026 04:04:04 +0000 (0:00:04.326) 0:03:39.345 **** 2026-02-04 04:04:16.561366 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:04:16.561377 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:04:16.561387 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:04:16.561398 | orchestrator | 2026-02-04 04:04:16.561409 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-04 04:04:16.561450 | orchestrator | Wednesday 04 February 2026 04:04:05 +0000 (0:00:01.337) 0:03:40.683 **** 2026-02-04 04:04:16.561461 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:04:16.561472 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:04:16.561498 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:04:16.561509 | orchestrator | 2026-02-04 04:04:16.561520 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-04 04:04:16.561542 | orchestrator | Wednesday 04 February 2026 04:04:07 +0000 (0:00:01.619) 0:03:42.302 **** 2026-02-04 04:04:16.561553 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:04:16.561564 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:04:16.561574 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:04:16.561585 | orchestrator | 2026-02-04 
04:04:16.561611 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-04 04:04:16.561622 | orchestrator | Wednesday 04 February 2026 04:04:08 +0000 (0:00:01.589) 0:03:43.892 **** 2026-02-04 04:04:16.561633 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 04:04:16.561644 | orchestrator | 2026-02-04 04:04:16.561655 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-04 04:04:16.561675 | orchestrator | Wednesday 04 February 2026 04:04:10 +0000 (0:00:01.688) 0:03:45.580 **** 2026-02-04 04:04:16.561687 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:04:16.561698 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:04:16.561709 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:04:16.561720 | orchestrator | 2026-02-04 04:04:16.561730 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-04 04:04:16.561741 | orchestrator | Wednesday 04 February 2026 04:04:12 +0000 (0:00:01.377) 0:03:46.958 **** 2026-02-04 04:04:16.561752 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:04:16.561763 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:04:16.561773 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:04:16.561784 | orchestrator | 2026-02-04 04:04:16.561795 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-04 04:04:16.561806 | orchestrator | Wednesday 04 February 2026 04:04:13 +0000 (0:00:01.381) 0:03:48.339 **** 2026-02-04 04:04:16.561816 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:04:16.561827 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:04:16.561838 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:04:16.561849 | orchestrator | 2026-02-04 04:04:16.561860 | orchestrator | TASK [k3s_agent : Create 
/etc/rancher/k3s directory] *************************** 2026-02-04 04:04:16.561871 | orchestrator | Wednesday 04 February 2026 04:04:14 +0000 (0:00:01.341) 0:03:49.680 **** 2026-02-04 04:04:16.561882 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:04:16.561893 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:04:16.561903 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:04:16.561914 | orchestrator | 2026-02-04 04:04:16.561925 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-04 04:04:16.561945 | orchestrator | Wednesday 04 February 2026 04:04:16 +0000 (0:00:01.792) 0:03:51.473 **** 2026-02-04 04:05:28.222123 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:05:28.222240 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:05:28.222255 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:05:28.222266 | orchestrator | 2026-02-04 04:05:28.222279 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-04 04:05:28.222291 | orchestrator | Wednesday 04 February 2026 04:04:18 +0000 (0:00:02.379) 0:03:53.853 **** 2026-02-04 04:05:28.222302 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:05:28.222376 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:05:28.222389 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:05:28.222401 | orchestrator | 2026-02-04 04:05:28.222412 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-04 04:05:28.222423 | orchestrator | Wednesday 04 February 2026 04:04:21 +0000 (0:00:02.340) 0:03:56.194 **** 2026-02-04 04:05:28.222435 | orchestrator | changed: [testbed-node-3] 2026-02-04 04:05:28.222447 | orchestrator | changed: [testbed-node-4] 2026-02-04 04:05:28.222458 | orchestrator | changed: [testbed-node-5] 2026-02-04 04:05:28.222469 | orchestrator | 2026-02-04 04:05:28.222480 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 
2026-02-04 04:05:28.222491 | orchestrator | 2026-02-04 04:05:28.222502 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-04 04:05:28.222552 | orchestrator | Wednesday 04 February 2026 04:04:29 +0000 (0:00:07.764) 0:04:03.959 **** 2026-02-04 04:05:28.222564 | orchestrator | ok: [testbed-manager] 2026-02-04 04:05:28.222575 | orchestrator | 2026-02-04 04:05:28.222588 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-04 04:05:28.222603 | orchestrator | Wednesday 04 February 2026 04:04:31 +0000 (0:00:02.073) 0:04:06.032 **** 2026-02-04 04:05:28.222616 | orchestrator | ok: [testbed-manager] 2026-02-04 04:05:28.222629 | orchestrator | 2026-02-04 04:05:28.222642 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-04 04:05:28.222655 | orchestrator | Wednesday 04 February 2026 04:04:32 +0000 (0:00:01.421) 0:04:07.453 **** 2026-02-04 04:05:28.222669 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-04 04:05:28.222682 | orchestrator | 2026-02-04 04:05:28.222696 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-04 04:05:28.222709 | orchestrator | Wednesday 04 February 2026 04:04:34 +0000 (0:00:01.566) 0:04:09.020 **** 2026-02-04 04:05:28.222722 | orchestrator | changed: [testbed-manager] 2026-02-04 04:05:28.222735 | orchestrator | 2026-02-04 04:05:28.222748 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-04 04:05:28.222762 | orchestrator | Wednesday 04 February 2026 04:04:36 +0000 (0:00:02.015) 0:04:11.036 **** 2026-02-04 04:05:28.222776 | orchestrator | changed: [testbed-manager] 2026-02-04 04:05:28.222789 | orchestrator | 2026-02-04 04:05:28.222802 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-04 04:05:28.222815 
| orchestrator | Wednesday 04 February 2026 04:04:37 +0000 (0:00:01.602) 0:04:12.638 **** 2026-02-04 04:05:28.222828 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-04 04:05:28.222841 | orchestrator | 2026-02-04 04:05:28.222855 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-04 04:05:28.222868 | orchestrator | Wednesday 04 February 2026 04:04:40 +0000 (0:00:02.901) 0:04:15.540 **** 2026-02-04 04:05:28.222880 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-04 04:05:28.222893 | orchestrator | 2026-02-04 04:05:28.222907 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-04 04:05:28.222920 | orchestrator | Wednesday 04 February 2026 04:04:42 +0000 (0:00:01.818) 0:04:17.359 **** 2026-02-04 04:05:28.222934 | orchestrator | ok: [testbed-manager] 2026-02-04 04:05:28.222945 | orchestrator | 2026-02-04 04:05:28.222956 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-04 04:05:28.222967 | orchestrator | Wednesday 04 February 2026 04:04:43 +0000 (0:00:01.406) 0:04:18.765 **** 2026-02-04 04:05:28.222978 | orchestrator | ok: [testbed-manager] 2026-02-04 04:05:28.222989 | orchestrator | 2026-02-04 04:05:28.222999 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-04 04:05:28.223010 | orchestrator | 2026-02-04 04:05:28.223021 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-04 04:05:28.223032 | orchestrator | Wednesday 04 February 2026 04:04:45 +0000 (0:00:01.675) 0:04:20.441 **** 2026-02-04 04:05:28.223043 | orchestrator | ok: [testbed-manager] 2026-02-04 04:05:28.223054 | orchestrator | 2026-02-04 04:05:28.223065 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-04 04:05:28.223076 | orchestrator | Wednesday 04 
February 2026 04:04:46 +0000 (0:00:01.141) 0:04:21.583 **** 2026-02-04 04:05:28.223086 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 04:05:28.223098 | orchestrator | 2026-02-04 04:05:28.223109 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-04 04:05:28.223120 | orchestrator | Wednesday 04 February 2026 04:04:48 +0000 (0:00:01.537) 0:04:23.120 **** 2026-02-04 04:05:28.223131 | orchestrator | ok: [testbed-manager] 2026-02-04 04:05:28.223141 | orchestrator | 2026-02-04 04:05:28.223152 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-02-04 04:05:28.223163 | orchestrator | Wednesday 04 February 2026 04:04:50 +0000 (0:00:01.824) 0:04:24.945 **** 2026-02-04 04:05:28.223181 | orchestrator | ok: [testbed-manager] 2026-02-04 04:05:28.223192 | orchestrator | 2026-02-04 04:05:28.223203 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-04 04:05:28.223214 | orchestrator | Wednesday 04 February 2026 04:04:52 +0000 (0:00:02.599) 0:04:27.545 **** 2026-02-04 04:05:28.223226 | orchestrator | ok: [testbed-manager] 2026-02-04 04:05:28.223237 | orchestrator | 2026-02-04 04:05:28.223248 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-04 04:05:28.223259 | orchestrator | Wednesday 04 February 2026 04:04:54 +0000 (0:00:01.474) 0:04:29.019 **** 2026-02-04 04:05:28.223270 | orchestrator | ok: [testbed-manager] 2026-02-04 04:05:28.223281 | orchestrator | 2026-02-04 04:05:28.223292 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-04 04:05:28.223303 | orchestrator | Wednesday 04 February 2026 04:04:55 +0000 (0:00:01.496) 0:04:30.516 **** 2026-02-04 04:05:28.223332 | orchestrator | ok: [testbed-manager] 2026-02-04 04:05:28.223344 | orchestrator | 
2026-02-04 04:05:28.223372 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-04 04:05:28.223384 | orchestrator | Wednesday 04 February 2026 04:04:57 +0000 (0:00:01.691) 0:04:32.207 **** 2026-02-04 04:05:28.223394 | orchestrator | ok: [testbed-manager] 2026-02-04 04:05:28.223405 | orchestrator | 2026-02-04 04:05:28.223416 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-04 04:05:28.223427 | orchestrator | Wednesday 04 February 2026 04:04:59 +0000 (0:00:02.555) 0:04:34.763 **** 2026-02-04 04:05:28.223437 | orchestrator | ok: [testbed-manager] 2026-02-04 04:05:28.223448 | orchestrator | 2026-02-04 04:05:28.223459 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-02-04 04:05:28.223469 | orchestrator | 2026-02-04 04:05:28.223480 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-04 04:05:28.223491 | orchestrator | Wednesday 04 February 2026 04:05:01 +0000 (0:00:01.666) 0:04:36.429 **** 2026-02-04 04:05:28.223502 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:05:28.223513 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:05:28.223523 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:05:28.223534 | orchestrator | 2026-02-04 04:05:28.223545 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-04 04:05:28.223556 | orchestrator | Wednesday 04 February 2026 04:05:02 +0000 (0:00:01.375) 0:04:37.805 **** 2026-02-04 04:05:28.223567 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:05:28.223577 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:05:28.223588 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:05:28.223599 | orchestrator | 2026-02-04 04:05:28.223610 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 
2026-02-04 04:05:28.223621 | orchestrator | Wednesday 04 February 2026 04:05:04 +0000 (0:00:01.551) 0:04:39.356 **** 2026-02-04 04:05:28.223631 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:05:28.223642 | orchestrator | 2026-02-04 04:05:28.223653 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-04 04:05:28.223664 | orchestrator | Wednesday 04 February 2026 04:05:06 +0000 (0:00:01.793) 0:04:41.150 **** 2026-02-04 04:05:28.223674 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 04:05:28.223685 | orchestrator | 2026-02-04 04:05:28.223696 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-04 04:05:28.223706 | orchestrator | Wednesday 04 February 2026 04:05:08 +0000 (0:00:01.833) 0:04:42.984 **** 2026-02-04 04:05:28.223717 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 04:05:28.223728 | orchestrator | 2026-02-04 04:05:28.223738 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-04 04:05:28.223749 | orchestrator | Wednesday 04 February 2026 04:05:09 +0000 (0:00:01.855) 0:04:44.839 **** 2026-02-04 04:05:28.223760 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:05:28.223778 | orchestrator | 2026-02-04 04:05:28.223789 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-04 04:05:28.223799 | orchestrator | Wednesday 04 February 2026 04:05:11 +0000 (0:00:01.165) 0:04:46.005 **** 2026-02-04 04:05:28.223810 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 04:05:28.223821 | orchestrator | 2026-02-04 04:05:28.223831 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-04 04:05:28.223842 | orchestrator | Wednesday 04 February 2026 04:05:13 +0000 (0:00:01.967) 
0:04:47.973 **** 2026-02-04 04:05:28.223853 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 04:05:28.223864 | orchestrator | 2026-02-04 04:05:28.223874 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-04 04:05:28.223885 | orchestrator | Wednesday 04 February 2026 04:05:15 +0000 (0:00:02.255) 0:04:50.229 **** 2026-02-04 04:05:28.223896 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 04:05:28.223906 | orchestrator | 2026-02-04 04:05:28.223917 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-04 04:05:28.223928 | orchestrator | Wednesday 04 February 2026 04:05:16 +0000 (0:00:01.243) 0:04:51.472 **** 2026-02-04 04:05:28.223939 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 04:05:28.223949 | orchestrator | 2026-02-04 04:05:28.223960 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-04 04:05:28.223971 | orchestrator | Wednesday 04 February 2026 04:05:17 +0000 (0:00:01.213) 0:04:52.685 **** 2026-02-04 04:05:28.223982 | orchestrator | ok: [testbed-node-0 -> localhost] => { 2026-02-04 04:05:28.223992 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n" 2026-02-04 04:05:28.224004 | orchestrator | } 2026-02-04 04:05:28.224015 | orchestrator | 2026-02-04 04:05:28.224026 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-04 04:05:28.224036 | orchestrator | Wednesday 04 February 2026 04:05:18 +0000 (0:00:01.194) 0:04:53.880 **** 2026-02-04 04:05:28.224047 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:05:28.224058 | orchestrator | 2026-02-04 04:05:28.224068 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-04 04:05:28.224086 | orchestrator | Wednesday 04 February 2026 04:05:20 +0000 
(0:00:01.139) 0:04:55.020 **** 2026-02-04 04:05:28.224098 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-04 04:05:28.224109 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-04 04:05:28.224120 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-04 04:05:28.224131 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-04 04:05:28.224141 | orchestrator | 2026-02-04 04:05:28.224152 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-04 04:05:28.224163 | orchestrator | Wednesday 04 February 2026 04:05:25 +0000 (0:00:05.688) 0:05:00.709 **** 2026-02-04 04:05:28.224174 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-04 04:05:28.224185 | orchestrator | 2026-02-04 04:05:28.224195 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-04 04:05:28.224212 | orchestrator | Wednesday 04 February 2026 04:05:28 +0000 (0:00:02.428) 0:05:03.138 **** 2026-02-04 04:06:06.368978 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 04:06:06.369095 | orchestrator | 2026-02-04 04:06:06.369111 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-04 04:06:06.369124 | orchestrator | Wednesday 04 February 2026 04:05:30 +0000 (0:00:02.543) 0:05:05.681 **** 2026-02-04 04:06:06.369135 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-04 04:06:06.369146 | orchestrator | 2026-02-04 04:06:06.369158 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-04 04:06:06.369170 | orchestrator | Wednesday 04 February 2026 04:05:34 +0000 (0:00:04.059) 0:05:09.741 **** 2026-02-04 04:06:06.369181 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:06:06.369214 | orchestrator | 2026-02-04 
04:06:06.369226 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-04 04:06:06.369237 | orchestrator | Wednesday 04 February 2026 04:05:36 +0000 (0:00:01.188) 0:05:10.930 **** 2026-02-04 04:06:06.369248 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-04 04:06:06.369260 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-04 04:06:06.369321 | orchestrator | 2026-02-04 04:06:06.369348 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-04 04:06:06.369360 | orchestrator | Wednesday 04 February 2026 04:05:38 +0000 (0:00:02.982) 0:05:13.913 **** 2026-02-04 04:06:06.369371 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:06:06.369381 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:06:06.369392 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:06:06.369403 | orchestrator | 2026-02-04 04:06:06.369414 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-04 04:06:06.369424 | orchestrator | Wednesday 04 February 2026 04:05:40 +0000 (0:00:01.347) 0:05:15.261 **** 2026-02-04 04:06:06.369435 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:06:06.369446 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:06:06.369457 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:06:06.369468 | orchestrator | 2026-02-04 04:06:06.369479 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-04 04:06:06.369490 | orchestrator | 2026-02-04 04:06:06.369500 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-04 04:06:06.369511 | orchestrator | Wednesday 04 February 2026 04:05:42 +0000 (0:00:02.040) 0:05:17.302 **** 2026-02-04 04:06:06.369524 | orchestrator | ok: [testbed-manager] 
2026-02-04 04:06:06.369537 | orchestrator | 2026-02-04 04:06:06.369551 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-02-04 04:06:06.369565 | orchestrator | Wednesday 04 February 2026 04:05:43 +0000 (0:00:01.111) 0:05:18.413 **** 2026-02-04 04:06:06.369578 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-04 04:06:06.369592 | orchestrator | 2026-02-04 04:06:06.369605 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-04 04:06:06.369618 | orchestrator | Wednesday 04 February 2026 04:05:44 +0000 (0:00:01.441) 0:05:19.855 **** 2026-02-04 04:06:06.369632 | orchestrator | ok: [testbed-manager] 2026-02-04 04:06:06.369644 | orchestrator | 2026-02-04 04:06:06.369657 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-04 04:06:06.369671 | orchestrator | 2026-02-04 04:06:06.369684 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-04 04:06:06.369697 | orchestrator | Wednesday 04 February 2026 04:05:49 +0000 (0:00:05.041) 0:05:24.896 **** 2026-02-04 04:06:06.369710 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:06:06.369723 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:06:06.369737 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:06:06.369749 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:06:06.369763 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:06:06.369776 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:06:06.369789 | orchestrator | 2026-02-04 04:06:06.369801 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-04 04:06:06.369814 | orchestrator | Wednesday 04 February 2026 04:05:51 +0000 (0:00:01.995) 0:05:26.892 **** 2026-02-04 04:06:06.369829 | orchestrator | ok: [testbed-node-5 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-02-04 04:06:06.369842 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-04 04:06:06.369856 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-04 04:06:06.369870 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-04 04:06:06.369882 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-04 04:06:06.369900 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-04 04:06:06.369911 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-04 04:06:06.369922 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-04 04:06:06.369933 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-04 04:06:06.369944 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-04 04:06:06.369954 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-04 04:06:06.369965 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-04 04:06:06.369976 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-04 04:06:06.369987 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-04 04:06:06.369998 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-04 04:06:06.370092 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-04 04:06:06.370107 | orchestrator | ok: [testbed-node-0 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-02-04 04:06:06.370118 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-04 04:06:06.370128 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-04 04:06:06.370139 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-04 04:06:06.370150 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-04 04:06:06.370161 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-04 04:06:06.370171 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-04 04:06:06.370182 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-04 04:06:06.370193 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-04 04:06:06.370204 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-04 04:06:06.370215 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-04 04:06:06.370226 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-04 04:06:06.370237 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-04 04:06:06.370248 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-04 04:06:06.370259 | orchestrator | 2026-02-04 04:06:06.370287 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-04 04:06:06.370298 | orchestrator | Wednesday 04 February 2026 04:06:01 +0000 (0:00:10.003) 0:05:36.896 **** 2026-02-04 04:06:06.370309 | orchestrator | skipping: 
[testbed-node-3] 2026-02-04 04:06:06.370320 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:06:06.370331 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:06:06.370341 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:06:06.370352 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:06:06.370363 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:06:06.370374 | orchestrator | 2026-02-04 04:06:06.370384 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-04 04:06:06.370395 | orchestrator | Wednesday 04 February 2026 04:06:03 +0000 (0:00:01.855) 0:05:38.752 **** 2026-02-04 04:06:06.370406 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:06:06.370417 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:06:06.370438 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:06:06.370449 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:06:06.370459 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:06:06.370470 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:06:06.370481 | orchestrator | 2026-02-04 04:06:06.370492 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 04:06:06.370503 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 04:06:06.370517 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-04 04:06:06.370528 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-04 04:06:06.370539 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-04 04:06:06.370550 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-04 04:06:06.370561 | orchestrator | testbed-node-4 : ok=16  
changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-04 04:06:06.370572 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-04 04:06:06.370582 | orchestrator | 2026-02-04 04:06:06.370593 | orchestrator | 2026-02-04 04:06:06.370605 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 04:06:06.370616 | orchestrator | Wednesday 04 February 2026 04:06:06 +0000 (0:00:02.520) 0:05:41.272 **** 2026-02-04 04:06:06.370626 | orchestrator | =============================================================================== 2026-02-04 04:06:06.370637 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.93s 2026-02-04 04:06:06.370648 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 54.73s 2026-02-04 04:06:06.370659 | orchestrator | Manage labels ---------------------------------------------------------- 10.00s 2026-02-04 04:06:06.370670 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 7.76s 2026-02-04 04:06:06.370681 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.69s 2026-02-04 04:06:06.370692 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.04s 2026-02-04 04:06:06.370709 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.33s 2026-02-04 04:06:06.811742 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.15s 2026-02-04 04:06:06.811825 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.06s 2026-02-04 04:06:06.811837 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 3.25s 2026-02-04 04:06:06.811845 
| orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 3.08s 2026-02-04 04:06:06.811852 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.98s 2026-02-04 04:06:06.811860 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.90s 2026-02-04 04:06:06.811867 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.67s 2026-02-04 04:06:06.811876 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.60s 2026-02-04 04:06:06.811907 | orchestrator | kubectl : Install required packages ------------------------------------- 2.55s 2026-02-04 04:06:06.811915 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.54s 2026-02-04 04:06:06.811940 | orchestrator | Manage taints ----------------------------------------------------------- 2.52s 2026-02-04 04:06:06.811948 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.44s 2026-02-04 04:06:06.811955 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.44s 2026-02-04 04:06:07.132443 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-04 04:06:07.132555 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh 2026-02-04 04:06:07.140563 | orchestrator | + set -e 2026-02-04 04:06:07.140643 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-04 04:06:07.140657 | orchestrator | ++ export INTERACTIVE=false 2026-02-04 04:06:07.140669 | orchestrator | ++ INTERACTIVE=false 2026-02-04 04:06:07.140680 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-04 04:06:07.140691 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-04 04:06:07.140702 | orchestrator | + osism apply openstackclient 2026-02-04 04:06:19.315872 | orchestrator | 2026-02-04 04:06:19 | INFO  | Task 
f90d5921-74db-4ae5-828d-8904effc0acc (openstackclient) was prepared for execution. 2026-02-04 04:06:19.315981 | orchestrator | 2026-02-04 04:06:19 | INFO  | It takes a moment until task f90d5921-74db-4ae5-828d-8904effc0acc (openstackclient) has been started and output is visible here. 2026-02-04 04:06:54.668776 | orchestrator | 2026-02-04 04:06:54.668913 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-04 04:06:54.668930 | orchestrator | 2026-02-04 04:06:54.668941 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-04 04:06:54.668952 | orchestrator | Wednesday 04 February 2026 04:06:26 +0000 (0:00:02.234) 0:00:02.234 **** 2026-02-04 04:06:54.668963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-04 04:06:54.668974 | orchestrator | 2026-02-04 04:06:54.668984 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-04 04:06:54.668994 | orchestrator | Wednesday 04 February 2026 04:06:27 +0000 (0:00:01.826) 0:00:04.061 **** 2026-02-04 04:06:54.669004 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-04 04:06:54.669015 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-04 04:06:54.669025 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-04 04:06:54.669035 | orchestrator | 2026-02-04 04:06:54.669045 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-04 04:06:54.669055 | orchestrator | Wednesday 04 February 2026 04:06:30 +0000 (0:00:02.336) 0:00:06.397 **** 2026-02-04 04:06:54.669065 | orchestrator | changed: [testbed-manager] 2026-02-04 04:06:54.669075 | orchestrator | 2026-02-04 04:06:54.669085 | orchestrator | TASK 
[osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-04 04:06:54.669095 | orchestrator | Wednesday 04 February 2026 04:06:32 +0000 (0:00:02.194) 0:00:08.592 **** 2026-02-04 04:06:54.669105 | orchestrator | ok: [testbed-manager] 2026-02-04 04:06:54.669116 | orchestrator | 2026-02-04 04:06:54.669126 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-04 04:06:54.669136 | orchestrator | Wednesday 04 February 2026 04:06:34 +0000 (0:00:02.100) 0:00:10.693 **** 2026-02-04 04:06:54.669146 | orchestrator | ok: [testbed-manager] 2026-02-04 04:06:54.669155 | orchestrator | 2026-02-04 04:06:54.669165 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-04 04:06:54.669175 | orchestrator | Wednesday 04 February 2026 04:06:36 +0000 (0:00:01.805) 0:00:12.499 **** 2026-02-04 04:06:54.669185 | orchestrator | ok: [testbed-manager] 2026-02-04 04:06:54.669194 | orchestrator | 2026-02-04 04:06:54.669245 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-04 04:06:54.669256 | orchestrator | Wednesday 04 February 2026 04:06:37 +0000 (0:00:01.366) 0:00:13.866 **** 2026-02-04 04:06:54.669266 | orchestrator | changed: [testbed-manager] 2026-02-04 04:06:54.669276 | orchestrator | 2026-02-04 04:06:54.669285 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-04 04:06:54.669321 | orchestrator | Wednesday 04 February 2026 04:06:48 +0000 (0:00:11.131) 0:00:24.997 **** 2026-02-04 04:06:54.669331 | orchestrator | changed: [testbed-manager] 2026-02-04 04:06:54.669341 | orchestrator | 2026-02-04 04:06:54.669351 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-04 04:06:54.669360 | orchestrator | Wednesday 04 February 2026 04:06:50 +0000 (0:00:01.999) 0:00:26.997 **** 2026-02-04 
04:06:54.669370 | orchestrator | changed: [testbed-manager] 2026-02-04 04:06:54.669380 | orchestrator | 2026-02-04 04:06:54.669389 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-04 04:06:54.669399 | orchestrator | Wednesday 04 February 2026 04:06:52 +0000 (0:00:01.623) 0:00:28.620 **** 2026-02-04 04:06:54.669409 | orchestrator | ok: [testbed-manager] 2026-02-04 04:06:54.669418 | orchestrator | 2026-02-04 04:06:54.669428 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 04:06:54.669437 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-04 04:06:54.669448 | orchestrator | 2026-02-04 04:06:54.669457 | orchestrator | 2026-02-04 04:06:54.669467 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 04:06:54.669477 | orchestrator | Wednesday 04 February 2026 04:06:54 +0000 (0:00:01.888) 0:00:30.508 **** 2026-02-04 04:06:54.669486 | orchestrator | =============================================================================== 2026-02-04 04:06:54.669496 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 11.13s 2026-02-04 04:06:54.669506 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.34s 2026-02-04 04:06:54.669515 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.19s 2026-02-04 04:06:54.669538 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.10s 2026-02-04 04:06:54.669548 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.00s 2026-02-04 04:06:54.669558 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.89s 2026-02-04 04:06:54.669567 | orchestrator | osism.services.openstackclient : Include 
tasks -------------------------- 1.83s 2026-02-04 04:06:54.669577 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.81s 2026-02-04 04:06:54.669587 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.62s 2026-02-04 04:06:54.669596 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.37s 2026-02-04 04:06:55.012350 | orchestrator | + osism apply -a upgrade common 2026-02-04 04:06:57.182814 | orchestrator | 2026-02-04 04:06:57 | INFO  | Task 213eb210-589f-4663-8d84-b4596bfa67e9 (common) was prepared for execution. 2026-02-04 04:06:57.182914 | orchestrator | 2026-02-04 04:06:57 | INFO  | It takes a moment until task 213eb210-589f-4663-8d84-b4596bfa67e9 (common) has been started and output is visible here. 2026-02-04 04:07:12.962447 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-04 04:07:12.962570 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-04 04:07:12.962597 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-04 04:07:12.962607 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-04 04:07:12.962628 | orchestrator | 2026-02-04 04:07:12.962655 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-04 04:07:12.963516 | orchestrator | 2026-02-04 04:07:12.963576 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-04 04:07:12.963586 | orchestrator | Wednesday 04 February 2026 04:07:03 +0000 (0:00:01.466) 0:00:01.466 **** 2026-02-04 04:07:12.963621 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 04:07:12.963631 | orchestrator | 2026-02-04 04:07:12.963638 | 
orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-04 04:07:12.963645 | orchestrator | Wednesday 04 February 2026 04:07:05 +0000 (0:00:02.140) 0:00:03.607 **** 2026-02-04 04:07:12.963653 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 04:07:12.963660 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 04:07:12.963666 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 04:07:12.963673 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 04:07:12.963681 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 04:07:12.963688 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 04:07:12.963695 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 04:07:12.963703 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 04:07:12.963710 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 04:07:12.963717 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-04 04:07:12.963724 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 04:07:12.963728 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 04:07:12.963733 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 04:07:12.963737 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 04:07:12.963741 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 04:07:12.963746 | orchestrator | ok: 
[testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 04:07:12.963750 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-04 04:07:12.963754 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 04:07:12.963758 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 04:07:12.963762 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 04:07:12.963766 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-04 04:07:12.963770 | orchestrator | 2026-02-04 04:07:12.963774 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-04 04:07:12.963778 | orchestrator | Wednesday 04 February 2026 04:07:07 +0000 (0:00:02.604) 0:00:06.211 **** 2026-02-04 04:07:12.963782 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 04:07:12.963788 | orchestrator | 2026-02-04 04:07:12.963794 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-04 04:07:12.963801 | orchestrator | Wednesday 04 February 2026 04:07:10 +0000 (0:00:02.103) 0:00:08.315 **** 2026-02-04 04:07:12.963811 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 04:07:12.963863 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 04:07:12.963873 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 04:07:12.963881 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 04:07:12.964016 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 04:07:12.964025 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 04:07:12.964030 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:12.964035 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:12.964055 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:15.115005 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:15.115112 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:15.115146 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:15.115165 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:15.115236 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-02-04 04:07:15.115250 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:15.115285 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:15.115316 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:15.115328 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:15.115340 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:15.115357 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:15.115369 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:15.115381 | orchestrator | 2026-02-04 04:07:15.115393 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-04 04:07:15.115405 | orchestrator | Wednesday 04 February 2026 04:07:14 +0000 (0:00:04.296) 0:00:12.611 **** 2026-02-04 04:07:15.115418 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:15.115432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:15.115452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:15.115474 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:16.041700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:16.041803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:16.041821 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:07:16.041873 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:16.041887 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:16.041899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:16.041932 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:07:16.041944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:16.041956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:16.041985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:16.041998 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:07:16.042009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:16.042097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:16.042110 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:07:16.042121 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:16.042133 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:07:16.042153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:16.042165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:16.042177 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:16.042255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:16.042276 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:07:16.042312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:18.385588 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:18.385716 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:07:18.385745 | orchestrator | 2026-02-04 04:07:18.385765 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-04 04:07:18.385783 | orchestrator | Wednesday 04 February 2026 04:07:16 +0000 (0:00:01.677) 0:00:14.289 **** 2026-02-04 04:07:18.385801 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:18.385933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:18.385957 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:18.385975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:18.385993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:18.386102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:18.386128 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:18.386147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:18.386203 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:07:18.386223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:18.386241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:18.386271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:18.386288 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:07:18.386305 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:07:18.386323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:18.386401 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:07:18.386419 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:18.386465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:26.048113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:26.048340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:26.048364 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:07:26.048378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:26.048391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:26.048403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.048414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.048426 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:07:26.048437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.048448 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:07:26.048459 | orchestrator |
2026-02-04 04:07:26.048471 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-02-04 04:07:26.048490 | orchestrator | Wednesday 04 February 2026 04:07:18 +0000 (0:00:02.347) 0:00:16.636 ****
2026-02-04 04:07:26.048501 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:07:26.048512 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:07:26.048523 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:07:26.048551 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:07:26.048569 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:07:26.048580 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:07:26.048591 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:07:26.048601 | orchestrator |
2026-02-04 04:07:26.048612 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-04 04:07:26.048623 | orchestrator | Wednesday 04 February 2026 04:07:19 +0000 (0:00:00.982) 0:00:17.619 ****
2026-02-04 04:07:26.048636 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:07:26.048650 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:07:26.048663 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:07:26.048676 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:07:26.048689 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:07:26.048702 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:07:26.048715 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:07:26.048728 | orchestrator |
2026-02-04 04:07:26.048741 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-04 04:07:26.048753 | orchestrator | Wednesday 04 February 2026 04:07:20 +0000 (0:00:00.953) 0:00:18.573 ****
2026-02-04 04:07:26.048766 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:07:26.048779 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:07:26.048793 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:07:26.048806 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:07:26.048819 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:07:26.048832 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:07:26.048845 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:07:26.048858 | orchestrator |
2026-02-04 04:07:26.048871 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-02-04 04:07:26.048883 | orchestrator | Wednesday 04 February 2026 04:07:21 +0000 (0:00:00.811) 0:00:19.384 ****
2026-02-04 04:07:26.048897 | orchestrator | changed: [testbed-manager]
2026-02-04 04:07:26.048910 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:07:26.048922 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:07:26.048935 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:07:26.048948 | orchestrator | changed: [testbed-node-3]
2026-02-04 04:07:26.048960 | orchestrator | changed: [testbed-node-4]
2026-02-04 04:07:26.048974 | orchestrator | changed: [testbed-node-5]
2026-02-04 04:07:26.048987 | orchestrator |
2026-02-04 04:07:26.048998 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2026-02-04 04:07:26.049008 | orchestrator | Wednesday 04 February 2026 04:07:22 +0000 (0:00:01.875) 0:00:21.260 ****
2026-02-04 04:07:26.049020 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:26.049032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:26.049051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:26.049062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:26.049087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:26.994503 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.994636 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:26.994664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:26.994685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.994740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.994763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.994809 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.994830 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.994851 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.994892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.994912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.994946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.994965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.994984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.995010 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:26.995053 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:39.829145 | orchestrator |
2026-02-04 04:07:39.829300 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2026-02-04 04:07:39.829315 | orchestrator | Wednesday 04 February 2026 04:07:26 +0000 (0:00:03.983) 0:00:25.244 ****
2026-02-04 04:07:39.829324 | orchestrator | [WARNING]: Skipped
2026-02-04 04:07:39.829333 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2026-02-04 04:07:39.829342 | orchestrator | to this access issue:
2026-02-04 04:07:39.829350 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2026-02-04 04:07:39.829358 | orchestrator | directory
2026-02-04 04:07:39.829367 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 04:07:39.829375 | orchestrator |
2026-02-04 04:07:39.829384 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2026-02-04 04:07:39.829392 | orchestrator | Wednesday 04 February 2026 04:07:28 +0000 (0:00:01.322) 0:00:26.566 ****
2026-02-04 04:07:39.829399 | orchestrator | [WARNING]: Skipped
2026-02-04 04:07:39.829407 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2026-02-04 04:07:39.829415 | orchestrator | to this access issue:
2026-02-04 04:07:39.829424 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2026-02-04 04:07:39.829432 | orchestrator | directory
2026-02-04 04:07:39.829444 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 04:07:39.829490 | orchestrator |
2026-02-04 04:07:39.829506 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2026-02-04 04:07:39.829518 | orchestrator | Wednesday 04 February 2026 04:07:29 +0000 (0:00:00.906) 0:00:27.473 ****
2026-02-04 04:07:39.829531 | orchestrator | [WARNING]: Skipped
2026-02-04 04:07:39.829543 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2026-02-04 04:07:39.829555 | orchestrator | to this access issue:
2026-02-04 04:07:39.829567 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2026-02-04 04:07:39.829580 | orchestrator | directory
2026-02-04 04:07:39.829594 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 04:07:39.829607 | orchestrator |
2026-02-04 04:07:39.829622 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2026-02-04 04:07:39.829631 | orchestrator | Wednesday 04 February 2026 04:07:30 +0000 (0:00:00.913) 0:00:28.387 ****
2026-02-04 04:07:39.829639 | orchestrator | [WARNING]: Skipped
2026-02-04 04:07:39.829647 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2026-02-04 04:07:39.829655 | orchestrator | to this access issue:
2026-02-04 04:07:39.829663 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2026-02-04 04:07:39.829671 | orchestrator | directory
2026-02-04 04:07:39.829679 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-04 04:07:39.829687 | orchestrator |
2026-02-04 04:07:39.829695 | orchestrator | TASK [common : Copying over fluentd.conf] **************************************
2026-02-04 04:07:39.829703 | orchestrator | Wednesday 04 February 2026 04:07:31 +0000 (0:00:00.936) 0:00:29.323 ****
2026-02-04 04:07:39.829710 | orchestrator | changed: [testbed-manager]
2026-02-04 04:07:39.829718 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:07:39.829726 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:07:39.829734 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:07:39.829742 | orchestrator | changed: [testbed-node-3]
2026-02-04 04:07:39.829750 | orchestrator | changed: [testbed-node-4]
2026-02-04 04:07:39.829758 | orchestrator | changed: [testbed-node-5]
2026-02-04 04:07:39.829766 | orchestrator |
2026-02-04 04:07:39.829779 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2026-02-04 04:07:39.829792 | orchestrator | Wednesday 04 February 2026 04:07:33 +0000 (0:00:02.896) 0:00:32.220 ****
2026-02-04 04:07:39.829805 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 04:07:39.829821 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 04:07:39.829833 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 04:07:39.829845 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 04:07:39.829858 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 04:07:39.829871 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 04:07:39.829884 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2026-02-04 04:07:39.829897 | orchestrator |
2026-02-04 04:07:39.829908 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2026-02-04 04:07:39.829921 | orchestrator | Wednesday 04 February 2026 04:07:36 +0000 (0:00:02.189) 0:00:34.409 ****
2026-02-04 04:07:39.829934 | orchestrator | ok: [testbed-manager]
2026-02-04 04:07:39.829949 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:07:39.829963 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:07:39.829976 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:07:39.829988 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:07:39.830010 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:07:39.830078 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:07:39.830087 | orchestrator |
2026-02-04 04:07:39.830095 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2026-02-04 04:07:39.830113 | orchestrator | Wednesday 04 February 2026 04:07:37 +0000 (0:00:01.770) 0:00:36.180 ****
2026-02-04 04:07:39.830142 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:39.830208 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:39.830227 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:39.830244 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:39.830258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:39.830271 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:39.830294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:39.830320 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:44.136037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:44.136120 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:44.136133 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:44.136142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:44.136148 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:44.136195 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:44.136224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:44.136243 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:44.136250 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:44.136257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:44.136264 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:44.136270 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:44.136290 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:07:44.136297 | orchestrator |
2026-02-04 04:07:44.136305 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-04 04:07:44.136312 | orchestrator | Wednesday 04 February 2026 04:07:39 +0000 (0:00:02.040) 0:00:38.221 ****
2026-02-04 04:07:44.136323 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 04:07:44.136330 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 04:07:44.136336 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 04:07:44.136342 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 04:07:44.136349 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 04:07:44.136355 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 04:07:44.136364 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-04 04:07:44.136371 | orchestrator |
2026-02-04 04:07:44.136377 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-04 04:07:44.136383 | orchestrator | Wednesday 04 February 2026 04:07:41 +0000 (0:00:01.983) 0:00:40.205 ****
2026-02-04 04:07:44.136390 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 04:07:44.136396 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 04:07:44.136402 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 04:07:44.136408 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 04:07:44.136419 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 04:07:46.450547 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 04:07:46.450624 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-04 04:07:46.450631 | orchestrator |
2026-02-04 04:07:46.450636 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-02-04 04:07:46.450641 | orchestrator | Wednesday 04 February 2026 04:07:44 +0000 (0:00:02.184) 0:00:42.389 ****
2026-02-04 04:07:46.450647 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:46.450655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:46.450659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:46.450663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:46.450683 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:46.450698 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-04 04:07:46.450714 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/',
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-04 04:07:46.450719 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:46.450723 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:46.450727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:46.450735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:46.450739 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:46.450746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:46.450758 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:48.823576 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:48.823680 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:48.823697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:48.823737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:48.823749 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:48.823760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:48.823789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:07:48.823802 | orchestrator | 2026-02-04 04:07:48.823815 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-04 04:07:48.823827 | orchestrator | Wednesday 04 February 2026 04:07:47 +0000 (0:00:03.136) 0:00:45.525 **** 2026-02-04 04:07:48.823839 | orchestrator | changed: [testbed-manager] => { 2026-02-04 04:07:48.823850 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:07:48.823861 | orchestrator | } 2026-02-04 04:07:48.823872 | orchestrator | changed: [testbed-node-0] => { 2026-02-04 04:07:48.823883 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:07:48.823894 | orchestrator | } 2026-02-04 04:07:48.823905 | orchestrator | changed: [testbed-node-1] => { 2026-02-04 04:07:48.823915 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:07:48.823926 | orchestrator | } 2026-02-04 04:07:48.823937 | orchestrator | changed: [testbed-node-2] => { 2026-02-04 04:07:48.823947 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:07:48.823958 | orchestrator | } 2026-02-04 04:07:48.823969 | orchestrator | changed: [testbed-node-3] => { 2026-02-04 04:07:48.823979 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:07:48.824012 | orchestrator | } 2026-02-04 04:07:48.824024 | orchestrator | changed: [testbed-node-4] => { 2026-02-04 04:07:48.824034 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:07:48.824045 | orchestrator | } 2026-02-04 04:07:48.824055 | orchestrator | changed: [testbed-node-5] => { 2026-02-04 04:07:48.824066 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:07:48.824076 | orchestrator | } 2026-02-04 04:07:48.824087 | orchestrator | 2026-02-04 04:07:48.824098 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-04 04:07:48.824127 | orchestrator | Wednesday 04 February 2026 04:07:48 +0000 
(0:00:01.050) 0:00:46.576 **** 2026-02-04 04:07:48.824140 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:48.824199 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:48.824212 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:48.824224 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:07:48.824236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:48.824248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:48.824260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:48.824271 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:07:48.824282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:48.824305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:51.326519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:51.326631 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:07:51.326651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  
2026-02-04 04:07:51.326687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:51.326700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:51.326712 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:07:51.326729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:51.326742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:51.326754 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:51.326787 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-04 04:07:51.326800 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-04 04:07:51.326824 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:07:51.326855 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:51.326868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:51.326880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:51.326891 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:07:51.326902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-04 04:07:51.326919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:51.326931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:07:51.326977 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:07:51.326989 | orchestrator | 2026-02-04 04:07:51.327001 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-04 04:07:51.327012 | orchestrator | Wednesday 04 February 2026 04:07:50 +0000 (0:00:02.155) 0:00:48.731 **** 2026-02-04 04:07:51.327036 | orchestrator | 2026-02-04 04:07:51.327047 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-04 04:07:51.327058 | orchestrator | Wednesday 04 February 2026 04:07:50 +0000 (0:00:00.079) 0:00:48.811 **** 2026-02-04 04:07:51.327069 | orchestrator | 2026-02-04 04:07:51.327080 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-04 04:07:51.327091 | orchestrator | Wednesday 04 February 2026 04:07:50 +0000 (0:00:00.075) 0:00:48.886 **** 2026-02-04 04:07:51.327102 | orchestrator | 2026-02-04 04:07:51.327113 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-04 04:07:51.327123 | orchestrator | Wednesday 04 February 2026 04:07:50 +0000 (0:00:00.086) 0:00:48.972 **** 2026-02-04 
04:07:51.327134 | orchestrator | 2026-02-04 04:07:51.327187 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-04 04:07:51.327201 | orchestrator | Wednesday 04 February 2026 04:07:50 +0000 (0:00:00.072) 0:00:49.045 **** 2026-02-04 04:07:51.327212 | orchestrator | 2026-02-04 04:07:51.327230 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-04 04:09:18.491913 | orchestrator | Wednesday 04 February 2026 04:07:51 +0000 (0:00:00.340) 0:00:49.386 **** 2026-02-04 04:09:18.491994 | orchestrator | 2026-02-04 04:09:18.492001 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-04 04:09:18.492006 | orchestrator | Wednesday 04 February 2026 04:07:51 +0000 (0:00:00.072) 0:00:49.458 **** 2026-02-04 04:09:18.492010 | orchestrator | 2026-02-04 04:09:18.492014 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-04 04:09:18.492018 | orchestrator | Wednesday 04 February 2026 04:07:51 +0000 (0:00:00.103) 0:00:49.562 **** 2026-02-04 04:09:18.492022 | orchestrator | changed: [testbed-manager] 2026-02-04 04:09:18.492027 | orchestrator | changed: [testbed-node-4] 2026-02-04 04:09:18.492031 | orchestrator | changed: [testbed-node-5] 2026-02-04 04:09:18.492035 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:09:18.492039 | orchestrator | changed: [testbed-node-3] 2026-02-04 04:09:18.492043 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:09:18.492047 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:09:18.492050 | orchestrator | 2026-02-04 04:09:18.492054 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-04 04:09:18.492088 | orchestrator | Wednesday 04 February 2026 04:08:24 +0000 (0:00:33.606) 0:01:23.168 **** 2026-02-04 04:09:18.492093 | orchestrator | changed: [testbed-manager] 2026-02-04 
04:09:18.492096 | orchestrator | changed: [testbed-node-5]
2026-02-04 04:09:18.492100 | orchestrator | changed: [testbed-node-4]
2026-02-04 04:09:18.492104 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:09:18.492108 | orchestrator | changed: [testbed-node-3]
2026-02-04 04:09:18.492112 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:09:18.492115 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:09:18.492119 | orchestrator |
2026-02-04 04:09:18.492123 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-04 04:09:18.492127 | orchestrator | Wednesday 04 February 2026 04:08:59 +0000 (0:00:34.270) 0:01:57.439 ****
2026-02-04 04:09:18.492133 | orchestrator | ok: [testbed-manager]
2026-02-04 04:09:18.492140 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:09:18.492146 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:09:18.492152 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:09:18.492158 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:09:18.492165 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:09:18.492170 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:09:18.492176 | orchestrator |
2026-02-04 04:09:18.492182 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-04 04:09:18.492187 | orchestrator | Wednesday 04 February 2026 04:09:01 +0000 (0:00:02.093) 0:01:59.532 ****
2026-02-04 04:09:18.492222 | orchestrator | changed: [testbed-manager]
2026-02-04 04:09:18.492230 | orchestrator | changed: [testbed-node-3]
2026-02-04 04:09:18.492236 | orchestrator | changed: [testbed-node-4]
2026-02-04 04:09:18.492243 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:09:18.492249 | orchestrator | changed: [testbed-node-5]
2026-02-04 04:09:18.492255 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:09:18.492261 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:09:18.492264 | orchestrator |
2026-02-04 04:09:18.492268 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 04:09:18.492274 | orchestrator | testbed-manager : ok=22  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 04:09:18.492279 | orchestrator | testbed-node-0 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 04:09:18.492283 | orchestrator | testbed-node-1 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 04:09:18.492297 | orchestrator | testbed-node-2 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 04:09:18.492301 | orchestrator | testbed-node-3 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 04:09:18.492305 | orchestrator | testbed-node-4 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 04:09:18.492308 | orchestrator | testbed-node-5 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 04:09:18.492312 | orchestrator |
2026-02-04 04:09:18.492316 | orchestrator |
2026-02-04 04:09:18.492320 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 04:09:18.492323 | orchestrator | Wednesday 04 February 2026 04:09:17 +0000 (0:00:16.615) 0:02:16.148 ****
2026-02-04 04:09:18.492327 | orchestrator | ===============================================================================
2026-02-04 04:09:18.492331 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.27s
2026-02-04 04:09:18.492335 | orchestrator | common : Restart fluentd container ------------------------------------- 33.61s
2026-02-04 04:09:18.492339 | orchestrator | common : Restart cron container ---------------------------------------- 16.62s
2026-02-04 04:09:18.492342 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.30s
2026-02-04 04:09:18.492346 | orchestrator | common : Copying over config.json files for services -------------------- 3.98s
2026-02-04 04:09:18.492350 | orchestrator | service-check-containers : common | Check containers -------------------- 3.14s
2026-02-04 04:09:18.492353 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.90s
2026-02-04 04:09:18.492357 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.60s
2026-02-04 04:09:18.492361 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.35s
2026-02-04 04:09:18.492376 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.19s
2026-02-04 04:09:18.492380 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.18s
2026-02-04 04:09:18.492383 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.16s
2026-02-04 04:09:18.492387 | orchestrator | common : include_tasks -------------------------------------------------- 2.14s
2026-02-04 04:09:18.492391 | orchestrator | common : include_tasks -------------------------------------------------- 2.10s
2026-02-04 04:09:18.492394 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.09s
2026-02-04 04:09:18.492398 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.04s
2026-02-04 04:09:18.492406 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.98s
2026-02-04 04:09:18.492409 | orchestrator | common : Copying over kolla.target -------------------------------------- 1.88s
2026-02-04 04:09:18.492413 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.77s
2026-02-04 04:09:18.492417 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.68s
2026-02-04
04:09:18.828473 | orchestrator | + osism apply -a upgrade loadbalancer
2026-02-04 04:09:21.022922 | orchestrator | 2026-02-04 04:09:21 | INFO  | Task 66780c06-d08c-4d64-9970-d0a43ffb054e (loadbalancer) was prepared for execution.
2026-02-04 04:09:21.023025 | orchestrator | 2026-02-04 04:09:21 | INFO  | It takes a moment until task 66780c06-d08c-4d64-9970-d0a43ffb054e (loadbalancer) has been started and output is visible here.
2026-02-04 04:09:55.578856 | orchestrator |
2026-02-04 04:09:55.578991 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 04:09:55.579021 | orchestrator |
2026-02-04 04:09:55.579083 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 04:09:55.579095 | orchestrator | Wednesday 04 February 2026 04:09:27 +0000 (0:00:01.659) 0:00:01.659 ****
2026-02-04 04:09:55.579107 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:09:55.579119 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:09:55.579130 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:09:55.579141 | orchestrator |
2026-02-04 04:09:55.579152 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 04:09:55.579164 | orchestrator | Wednesday 04 February 2026 04:09:28 +0000 (0:00:01.714) 0:00:03.374 ****
2026-02-04 04:09:55.579175 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-04 04:09:55.579186 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-04 04:09:55.579197 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-04 04:09:55.579208 | orchestrator |
2026-02-04 04:09:55.579219 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-04 04:09:55.579230 | orchestrator |
2026-02-04 04:09:55.579262 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-04 04:09:55.579274 | orchestrator | Wednesday 04 February 2026 04:09:30 +0000 (0:00:01.891) 0:00:05.266 ****
2026-02-04 04:09:55.579286 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 04:09:55.579297 | orchestrator |
2026-02-04 04:09:55.579308 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter containers] ***
2026-02-04 04:09:55.579319 | orchestrator | Wednesday 04 February 2026 04:09:33 +0000 (0:00:02.910) 0:00:08.177 ****
2026-02-04 04:09:55.579330 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:09:55.579342 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:09:55.579355 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:09:55.579384 | orchestrator |
2026-02-04 04:09:55.579399 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] *********************
2026-02-04 04:09:55.579414 | orchestrator | Wednesday 04 February 2026 04:09:35 +0000 (0:00:02.002) 0:00:10.179 ****
2026-02-04 04:09:55.579427 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:09:55.579440 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:09:55.579452 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:09:55.579465 | orchestrator |
2026-02-04 04:09:55.579478 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-04 04:09:55.579490 | orchestrator | Wednesday 04 February 2026 04:09:37 +0000 (0:00:02.148) 0:00:12.328 ****
2026-02-04 04:09:55.579503 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:09:55.579517 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:09:55.579530 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:09:55.579543 | orchestrator |
2026-02-04 04:09:55.579555 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-04 04:09:55.579569 | orchestrator | Wednesday 04 February 2026 04:09:39 +0000 (0:00:01.690) 0:00:14.018 ****
2026-02-04 04:09:55.579603 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 04:09:55.579616 | orchestrator |
2026-02-04 04:09:55.579629 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-04 04:09:55.579642 | orchestrator | Wednesday 04 February 2026 04:09:41 +0000 (0:00:02.002) 0:00:16.021 ****
2026-02-04 04:09:55.579655 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:09:55.579667 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:09:55.579681 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:09:55.579694 | orchestrator |
2026-02-04 04:09:55.579707 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-04 04:09:55.579718 | orchestrator | Wednesday 04 February 2026 04:09:43 +0000 (0:00:01.732) 0:00:17.753 ****
2026-02-04 04:09:55.579729 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-04 04:09:55.579752 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-04 04:09:55.579763 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-04 04:09:55.579774 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-04 04:09:55.579786 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-04 04:09:55.579796 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-04 04:09:55.579807 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-04 04:09:55.579819 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-04 04:09:55.579830 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-04 04:09:55.579840 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-04 04:09:55.579852 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-04 04:09:55.579863 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-04 04:09:55.579874 | orchestrator |
2026-02-04 04:09:55.579885 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-04 04:09:55.579896 | orchestrator | Wednesday 04 February 2026 04:09:46 +0000 (0:00:03.246) 0:00:20.999 ****
2026-02-04 04:09:55.579907 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-02-04 04:09:55.579918 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-02-04 04:09:55.579929 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-02-04 04:09:55.579940 | orchestrator |
2026-02-04 04:09:55.579959 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-04 04:09:55.580000 | orchestrator | Wednesday 04 February 2026 04:09:48 +0000 (0:00:01.976) 0:00:22.976 ****
2026-02-04 04:09:55.580022 | orchestrator | ok: [testbed-node-1] => (item=ip_vs)
2026-02-04 04:09:55.580073 | orchestrator | ok: [testbed-node-2] => (item=ip_vs)
2026-02-04 04:09:55.580092 | orchestrator | ok: [testbed-node-0] => (item=ip_vs)
2026-02-04 04:09:55.580105 | orchestrator |
2026-02-04 04:09:55.580116 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-04 04:09:55.580127 | orchestrator | Wednesday 04 February 2026 04:09:50 +0000 (0:00:02.239) 0:00:25.216 ****
2026-02-04 04:09:55.580137 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2026-02-04 04:09:55.580148 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:09:55.580159 |
orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-04 04:09:55.580169 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:09:55.580180 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-04 04:09:55.580191 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:09:55.580201 | orchestrator | 2026-02-04 04:09:55.580212 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-04 04:09:55.580238 | orchestrator | Wednesday 04 February 2026 04:09:52 +0000 (0:00:01.985) 0:00:27.202 **** 2026-02-04 04:09:55.580260 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 04:09:55.580279 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 04:09:55.580291 | orchestrator | ok: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 04:09:55.580302 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:09:55.580314 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}}) 2026-02-04 04:09:55.580335 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:10:06.681999 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:10:06.682201 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:10:06.682222 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:10:06.682235 | orchestrator | 2026-02-04 04:10:06.682262 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-04 04:10:06.682282 | orchestrator | Wednesday 04 February 2026 04:09:55 +0000 (0:00:02.758) 0:00:29.960 **** 2026-02-04 04:10:06.682294 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:10:06.682306 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:10:06.682317 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:10:06.682328 | orchestrator | 2026-02-04 04:10:06.682340 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-04 04:10:06.682351 | orchestrator | Wednesday 04 February 2026 04:09:57 +0000 (0:00:01.987) 0:00:31.948 **** 2026-02-04 04:10:06.682362 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-04 04:10:06.682373 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-04 04:10:06.682384 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-04 04:10:06.682395 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-04 04:10:06.682406 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-04 04:10:06.682417 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-04 04:10:06.682428 | orchestrator | 2026-02-04 04:10:06.682439 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-04 04:10:06.682450 | orchestrator | Wednesday 04 February 2026 04:10:00 +0000 (0:00:02.870) 0:00:34.819 **** 2026-02-04 04:10:06.682462 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:10:06.682473 
| orchestrator | ok: [testbed-node-1] 2026-02-04 04:10:06.682483 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:10:06.682494 | orchestrator | 2026-02-04 04:10:06.682505 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-04 04:10:06.682516 | orchestrator | Wednesday 04 February 2026 04:10:02 +0000 (0:00:02.290) 0:00:37.109 **** 2026-02-04 04:10:06.682527 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:10:06.682538 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:10:06.682549 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:10:06.682562 | orchestrator | 2026-02-04 04:10:06.682575 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-04 04:10:06.682588 | orchestrator | Wednesday 04 February 2026 04:10:04 +0000 (0:00:02.214) 0:00:39.324 **** 2026-02-04 04:10:06.682603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 04:10:06.682656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:10:06.682672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:10:06.682692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__16250bc7391a2644223b5554038364acbe9755e6', '__omit_place_holder__16250bc7391a2644223b5554038364acbe9755e6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 04:10:06.682707 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:10:06.682721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 04:10:06.682735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:10:06.682757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:10:06.682770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__16250bc7391a2644223b5554038364acbe9755e6', '__omit_place_holder__16250bc7391a2644223b5554038364acbe9755e6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 04:10:06.682783 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:10:06.682805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 04:10:10.830509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:10:10.830605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 
'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:10:10.830622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__16250bc7391a2644223b5554038364acbe9755e6', '__omit_place_holder__16250bc7391a2644223b5554038364acbe9755e6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 04:10:10.830634 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:10:10.830667 | orchestrator | 2026-02-04 04:10:10.830679 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-04 04:10:10.830691 | orchestrator | Wednesday 04 February 2026 04:10:06 +0000 (0:00:01.741) 0:00:41.066 **** 2026-02-04 04:10:10.830703 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 04:10:10.830716 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 04:10:10.830727 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 04:10:10.830761 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:10:10.830774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:10:10.830785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__16250bc7391a2644223b5554038364acbe9755e6', '__omit_place_holder__16250bc7391a2644223b5554038364acbe9755e6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 04:10:10.830803 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:10:10.830815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:10:10.830826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__16250bc7391a2644223b5554038364acbe9755e6', '__omit_place_holder__16250bc7391a2644223b5554038364acbe9755e6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 04:10:10.830855 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:10:24.711461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:10:24.711581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__16250bc7391a2644223b5554038364acbe9755e6', '__omit_place_holder__16250bc7391a2644223b5554038364acbe9755e6'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-04 04:10:24.711625 | orchestrator | 2026-02-04 04:10:24.711641 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-04 04:10:24.711654 | orchestrator | Wednesday 04 February 2026 04:10:10 +0000 (0:00:04.149) 0:00:45.216 **** 2026-02-04 04:10:24.711666 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 04:10:24.711679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 04:10:24.711691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 
04:10:24.711717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:10:24.711749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:10:24.711762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:10:24.711782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:10:24.711794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:10:24.711805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:10:24.711816 | orchestrator | 2026-02-04 04:10:24.711827 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-04 04:10:24.711838 | orchestrator | Wednesday 04 February 2026 
04:10:15 +0000 (0:00:04.885) 0:00:50.102 **** 2026-02-04 04:10:24.711850 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-04 04:10:24.711861 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-04 04:10:24.711872 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-04 04:10:24.711883 | orchestrator | 2026-02-04 04:10:24.711894 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-04 04:10:24.711905 | orchestrator | Wednesday 04 February 2026 04:10:18 +0000 (0:00:02.807) 0:00:52.910 **** 2026-02-04 04:10:24.711916 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-04 04:10:24.711927 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-04 04:10:24.711938 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-04 04:10:24.711948 | orchestrator | 2026-02-04 04:10:24.711961 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-04 04:10:24.711974 | orchestrator | Wednesday 04 February 2026 04:10:22 +0000 (0:00:04.290) 0:00:57.201 **** 2026-02-04 04:10:24.711987 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:10:24.712025 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:10:24.712055 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:10:45.189787 | orchestrator | 2026-02-04 04:10:45.189903 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-04 04:10:45.189919 | orchestrator | Wednesday 04 February 2026 04:10:24 +0000 (0:00:01.895) 0:00:59.097 **** 2026-02-04 04:10:45.189932 
| orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-04 04:10:45.189965 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-04 04:10:45.189977 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-04 04:10:45.190098 | orchestrator | 2026-02-04 04:10:45.190113 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-04 04:10:45.190124 | orchestrator | Wednesday 04 February 2026 04:10:27 +0000 (0:00:03.020) 0:01:02.118 **** 2026-02-04 04:10:45.190135 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-04 04:10:45.190147 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-04 04:10:45.190159 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-04 04:10:45.190280 | orchestrator | 2026-02-04 04:10:45.190297 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-04 04:10:45.190308 | orchestrator | Wednesday 04 February 2026 04:10:30 +0000 (0:00:02.766) 0:01:04.884 **** 2026-02-04 04:10:45.190368 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:10:45.190383 | orchestrator | 2026-02-04 04:10:45.190396 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-04 04:10:45.190409 | orchestrator | Wednesday 04 February 2026 04:10:32 +0000 (0:00:01.972) 0:01:06.857 **** 2026-02-04 04:10:45.190422 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-02-04 
04:10:45.190436 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-02-04 04:10:45.190448 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-02-04 04:10:45.190461 | orchestrator | 2026-02-04 04:10:45.190474 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-04 04:10:45.190487 | orchestrator | Wednesday 04 February 2026 04:10:35 +0000 (0:00:02.578) 0:01:09.435 **** 2026-02-04 04:10:45.190499 | orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-04 04:10:45.190513 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-04 04:10:45.190525 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-04 04:10:45.190538 | orchestrator | 2026-02-04 04:10:45.190550 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-02-04 04:10:45.190563 | orchestrator | Wednesday 04 February 2026 04:10:37 +0000 (0:00:02.651) 0:01:12.087 **** 2026-02-04 04:10:45.190576 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:10:45.190589 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:10:45.190603 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:10:45.190615 | orchestrator | 2026-02-04 04:10:45.190627 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-02-04 04:10:45.190638 | orchestrator | Wednesday 04 February 2026 04:10:39 +0000 (0:00:01.392) 0:01:13.479 **** 2026-02-04 04:10:45.190648 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:10:45.190659 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:10:45.190670 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:10:45.190680 | orchestrator | 2026-02-04 04:10:45.190691 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-04 04:10:45.190702 | orchestrator | Wednesday 04 February 2026 04:10:41 +0000 
(0:00:01.932) 0:01:15.411 **** 2026-02-04 04:10:45.190717 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 04:10:45.190750 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 04:10:45.190783 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 04:10:45.190796 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:10:45.190808 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:10:45.190819 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:10:45.190831 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:10:45.190851 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:10:45.190870 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:10:49.123754 | orchestrator | 2026-02-04 04:10:49.123859 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 
2026-02-04 04:10:49.123876 | orchestrator | Wednesday 04 February 2026 04:10:45 +0000 (0:00:04.158) 0:01:19.570 **** 2026-02-04 04:10:49.123892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 04:10:49.123907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:10:49.123920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:10:49.123932 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:10:49.123944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 04:10:49.124033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:10:49.124062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:10:49.124074 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:10:49.124105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 04:10:49.124117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:10:49.124129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:10:49.124140 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:10:49.124151 | orchestrator | 2026-02-04 04:10:49.124163 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-04 04:10:49.124175 | orchestrator | Wednesday 04 February 2026 04:10:46 +0000 (0:00:01.719) 0:01:21.289 **** 2026-02-04 04:10:49.124186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 04:10:49.124206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:10:49.124217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:10:49.124229 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:10:49.124253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 04:11:01.560261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:11:01.560379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:11:01.560397 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:11:01.560412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 04:11:01.560446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:11:01.560459 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:11:01.560471 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:11:01.560482 | orchestrator | 2026-02-04 04:11:01.560494 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-04 04:11:01.560507 | orchestrator | Wednesday 04 February 2026 04:10:49 +0000 (0:00:02.222) 0:01:23.511 **** 2026-02-04 04:11:01.560518 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-04 04:11:01.560530 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-04 04:11:01.560556 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-04 04:11:01.560567 | orchestrator | 2026-02-04 04:11:01.560578 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-04 04:11:01.560589 | orchestrator | Wednesday 04 February 2026 04:10:51 +0000 (0:00:02.617) 0:01:26.129 **** 2026-02-04 04:11:01.560600 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-04 04:11:01.560611 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-04 04:11:01.560622 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 
2026-02-04 04:11:01.560632 | orchestrator | 2026-02-04 04:11:01.560661 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-04 04:11:01.560673 | orchestrator | Wednesday 04 February 2026 04:10:55 +0000 (0:00:03.355) 0:01:29.485 **** 2026-02-04 04:11:01.560684 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 04:11:01.560695 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 04:11:01.560706 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-04 04:11:01.560717 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 04:11:01.560728 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:11:01.560739 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 04:11:01.560750 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:11:01.560761 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-04 04:11:01.560772 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:11:01.560783 | orchestrator | 2026-02-04 04:11:01.560796 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-04 04:11:01.560819 | orchestrator | Wednesday 04 February 2026 04:10:57 +0000 (0:00:02.516) 0:01:32.001 **** 2026-02-04 04:11:01.560834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 04:11:01.560848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 04:11:01.560860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 04:11:01.560876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': 
False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:11:01.560905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:11:05.479268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:11:05.479414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:11:05.479432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:11:05.479444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:11:05.479455 | orchestrator | 2026-02-04 04:11:05.479466 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-04 04:11:05.479477 | orchestrator | Wednesday 04 February 2026 04:11:01 +0000 (0:00:03.936) 0:01:35.938 **** 2026-02-04 04:11:05.479488 | orchestrator | changed: [testbed-node-0] => { 2026-02-04 04:11:05.479498 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:11:05.479508 | orchestrator | } 2026-02-04 
04:11:05.479517 | orchestrator | changed: [testbed-node-1] => { 2026-02-04 04:11:05.479527 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:11:05.479537 | orchestrator | } 2026-02-04 04:11:05.479547 | orchestrator | changed: [testbed-node-2] => { 2026-02-04 04:11:05.479556 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:11:05.479566 | orchestrator | } 2026-02-04 04:11:05.479576 | orchestrator | 2026-02-04 04:11:05.479585 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-04 04:11:05.479602 | orchestrator | Wednesday 04 February 2026 04:11:02 +0000 (0:00:01.434) 0:01:37.373 **** 2026-02-04 04:11:05.479620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 04:11:05.479659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:11:05.479709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:11:05.479727 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:11:05.479738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 04:11:05.479749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:11:05.479759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:11:05.479769 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:11:05.479779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 04:11:05.479794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:11:05.479822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:11:11.049867 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:11:11.049963 | orchestrator | 2026-02-04 04:11:11.050058 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-04 04:11:11.050067 | orchestrator | Wednesday 04 February 2026 04:11:05 +0000 (0:00:02.485) 0:01:39.859 **** 2026-02-04 04:11:11.050074 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:11:11.050081 | orchestrator | 2026-02-04 04:11:11.050088 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-04 04:11:11.050095 | orchestrator | Wednesday 04 February 2026 04:11:07 +0000 (0:00:02.019) 0:01:41.879 **** 2026-02-04 04:11:11.050107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:11:11.050118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 04:11:11.050126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 04:11:11.050165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 04:11:11.050201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:11:11.050209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 04:11:11.050217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 04:11:11.050224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 04:11:11.050234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:11:11.050248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 04:11:11.050258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 04:11:12.765958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 04:11:12.766172 | orchestrator | 2026-02-04 04:11:12.766197 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-02-04 04:11:12.766215 | orchestrator | Wednesday 04 February 2026 04:11:12 +0000 (0:00:04.658) 0:01:46.537 **** 2026-02-04 04:11:12.766233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:11:12.766254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 04:11:12.766289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 04:11:12.766327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 04:11:12.766344 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:11:12.766382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:11:12.766401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 04:11:12.766417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 04:11:12.766434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 04:11:12.766459 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:11:12.766482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:11:12.766500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-04 04:11:12.766526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-04 04:11:27.889687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-04 04:11:27.889808 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:11:27.889826 | orchestrator | 2026-02-04 04:11:27.889839 | orchestrator | TASK [haproxy-config : 
Configuring firewall for aodh] ************************** 2026-02-04 04:11:27.889852 | orchestrator | Wednesday 04 February 2026 04:11:13 +0000 (0:00:01.679) 0:01:48.217 **** 2026-02-04 04:11:27.889865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:11:27.889880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:11:27.889893 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:11:27.889905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:11:27.889938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:11:27.889950 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:11:27.890094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:11:27.890123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:11:27.890135 | orchestrator | 
skipping: [testbed-node-2] 2026-02-04 04:11:27.890146 | orchestrator | 2026-02-04 04:11:27.890157 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-04 04:11:27.890168 | orchestrator | Wednesday 04 February 2026 04:11:16 +0000 (0:00:02.388) 0:01:50.605 **** 2026-02-04 04:11:27.890180 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:11:27.890192 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:11:27.890204 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:11:27.890217 | orchestrator | 2026-02-04 04:11:27.890230 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-04 04:11:27.890243 | orchestrator | Wednesday 04 February 2026 04:11:18 +0000 (0:00:02.311) 0:01:52.917 **** 2026-02-04 04:11:27.890255 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:11:27.890268 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:11:27.890281 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:11:27.890294 | orchestrator | 2026-02-04 04:11:27.890308 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-04 04:11:27.890330 | orchestrator | Wednesday 04 February 2026 04:11:21 +0000 (0:00:03.020) 0:01:55.938 **** 2026-02-04 04:11:27.890348 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:11:27.890366 | orchestrator | 2026-02-04 04:11:27.890389 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-04 04:11:27.890407 | orchestrator | Wednesday 04 February 2026 04:11:23 +0000 (0:00:01.663) 0:01:57.601 **** 2026-02-04 04:11:27.890458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:11:27.890484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 04:11:27.890527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:11:27.890554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:11:27.890567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}})  2026-02-04 04:11:27.890579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:11:27.890601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:11:29.564534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 04:11:29.564667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:11:29.564687 | orchestrator | 2026-02-04 04:11:29.564700 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-04 04:11:29.564713 | orchestrator | Wednesday 04 February 2026 04:11:27 +0000 (0:00:04.669) 0:02:02.271 **** 2026-02-04 04:11:29.564749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:11:29.564764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 04:11:29.564776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:11:29.564810 | orchestrator | skipping: [testbed-node-0] 
2026-02-04 04:11:29.564845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:11:29.564864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 04:11:29.564877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:11:29.564888 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:11:29.564900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:11:29.564912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-04 04:11:29.564939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:11:45.932395 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:11:45.932511 | orchestrator | 2026-02-04 04:11:45.932525 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-04 04:11:45.932536 | orchestrator | Wednesday 04 February 2026 04:11:29 +0000 (0:00:01.678) 0:02:03.950 **** 2026-02-04 04:11:45.932545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:11:45.932558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-02-04 04:11:45.932568 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:11:45.932592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:11:45.932602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:11:45.932611 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:11:45.932620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:11:45.932629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:11:45.932638 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:11:45.932647 | orchestrator | 2026-02-04 04:11:45.932656 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-04 04:11:45.932664 | orchestrator | Wednesday 04 February 2026 04:11:31 +0000 (0:00:01.852) 0:02:05.803 **** 2026-02-04 04:11:45.932673 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:11:45.932683 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:11:45.932691 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:11:45.932700 | orchestrator | 2026-02-04 04:11:45.932709 | orchestrator | TASK [proxysql-config : 
Copying over barbican ProxySQL rules config] *********** 2026-02-04 04:11:45.932740 | orchestrator | Wednesday 04 February 2026 04:11:33 +0000 (0:00:02.284) 0:02:08.087 **** 2026-02-04 04:11:45.932749 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:11:45.932758 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:11:45.932766 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:11:45.932775 | orchestrator | 2026-02-04 04:11:45.932783 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-04 04:11:45.932793 | orchestrator | Wednesday 04 February 2026 04:11:36 +0000 (0:00:02.830) 0:02:10.918 **** 2026-02-04 04:11:45.932801 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:11:45.932810 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:11:45.932819 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:11:45.932827 | orchestrator | 2026-02-04 04:11:45.932836 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-04 04:11:45.932844 | orchestrator | Wednesday 04 February 2026 04:11:37 +0000 (0:00:01.374) 0:02:12.293 **** 2026-02-04 04:11:45.932853 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:11:45.932862 | orchestrator | 2026-02-04 04:11:45.932870 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-04 04:11:45.932879 | orchestrator | Wednesday 04 February 2026 04:11:39 +0000 (0:00:01.733) 0:02:14.026 **** 2026-02-04 04:11:45.932889 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 
check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-04 04:11:45.932918 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-04 04:11:45.932935 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-04 04:11:45.932972 | orchestrator | 2026-02-04 04:11:45.932984 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-04 04:11:45.933002 | orchestrator | Wednesday 04 February 2026 04:11:43 +0000 (0:00:03.626) 0:02:17.653 **** 2026-02-04 04:11:45.933013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-04 04:11:45.933024 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:11:45.933034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-04 04:11:45.933044 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:11:45.933062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-04 04:11:58.357177 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:11:58.357286 | orchestrator | 2026-02-04 04:11:58.357302 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-04 04:11:58.357314 | orchestrator | Wednesday 04 February 2026 04:11:45 +0000 (0:00:02.662) 0:02:20.316 **** 2026-02-04 04:11:58.357327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 
04:11:58.357355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 04:11:58.357386 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:11:58.357397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 04:11:58.357407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 04:11:58.357417 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:11:58.357427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 04:11:58.357441 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-04 04:11:58.357459 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:11:58.357476 | orchestrator | 2026-02-04 04:11:58.357494 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-04 04:11:58.357511 | orchestrator | Wednesday 04 February 2026 04:11:48 +0000 (0:00:02.885) 0:02:23.201 **** 2026-02-04 04:11:58.357526 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:11:58.357536 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:11:58.357546 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:11:58.357555 | orchestrator | 2026-02-04 04:11:58.357565 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-04 04:11:58.357574 | orchestrator | Wednesday 04 February 2026 04:11:50 +0000 (0:00:01.470) 0:02:24.672 **** 2026-02-04 04:11:58.357584 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:11:58.357593 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:11:58.357603 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:11:58.357612 | orchestrator | 2026-02-04 04:11:58.357621 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-04 04:11:58.357631 | orchestrator | Wednesday 04 February 2026 04:11:52 +0000 (0:00:02.378) 0:02:27.050 **** 2026-02-04 04:11:58.357641 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:11:58.357650 | orchestrator | 2026-02-04 04:11:58.357660 | orchestrator | TASK 
[haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-04 04:11:58.357669 | orchestrator | Wednesday 04 February 2026 04:11:54 +0000 (0:00:01.882) 0:02:28.933 **** 2026-02-04 04:11:58.357700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:11:58.357730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:11:58.357750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:11:58.357764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:11:58.357777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 
'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 04:11:58.357798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 04:12:00.434921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2026-02-04 04:12:00.435071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 04:12:00.435089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:12:00.435104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:12:00.435116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 04:12:00.435178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}})  2026-02-04 04:12:00.435192 | orchestrator | 2026-02-04 04:12:00.435206 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-04 04:12:00.435218 | orchestrator | Wednesday 04 February 2026 04:11:59 +0000 (0:00:04.932) 0:02:33.866 **** 2026-02-04 04:12:00.435230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:12:00.435243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:12:00.435255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 04:12:00.435266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 04:12:00.435286 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:12:00.435312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:12:11.732904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:12:11.733043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 04:12:11.733056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-04 04:12:11.733066 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:12:11.733077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:12:11.733106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:12:11.733143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-04 04:12:11.733153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-04 04:12:11.733160 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:12:11.733169 | orchestrator |
2026-02-04 04:12:11.733177 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2026-02-04 04:12:11.733186 | orchestrator | Wednesday 04 February 2026 04:12:01 +0000 (0:00:02.064) 0:02:35.930 ****
2026-02-04 04:12:11.733194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-04 04:12:11.733204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-04 04:12:11.733213 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:12:11.733220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-04 04:12:11.733234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-04 04:12:11.733242 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:12:11.733250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-04 04:12:11.733257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})
2026-02-04 04:12:11.733264 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:12:11.733272 | orchestrator |
2026-02-04 04:12:11.733279 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2026-02-04 04:12:11.733287 | orchestrator | Wednesday 04 February 2026 04:12:03 +0000 (0:00:02.063) 0:02:37.993 ****
2026-02-04 04:12:11.733294 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:12:11.733302 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:12:11.733309 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:12:11.733316 | orchestrator |
2026-02-04 04:12:11.733323 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2026-02-04 04:12:11.733331 | orchestrator | Wednesday 04 February 2026 04:12:05 +0000 (0:00:02.223) 0:02:40.217 ****
2026-02-04 04:12:11.733338 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:12:11.733345 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:12:11.733352 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:12:11.733359 | orchestrator |
2026-02-04 04:12:11.733370 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2026-02-04 04:12:11.733378 | orchestrator | Wednesday 04 February 2026 04:12:08 +0000 (0:00:02.908) 0:02:43.125 ****
2026-02-04 04:12:11.733385 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:12:11.733392 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:12:11.733399 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:12:11.733407 | orchestrator |
2026-02-04 04:12:11.733414 | orchestrator | TASK [include_role : cyborg] ***************************************************
2026-02-04 04:12:11.733421 | orchestrator | Wednesday 04 February 2026 04:12:10 +0000 (0:00:01.630) 0:02:44.755 ****
2026-02-04 04:12:11.733429 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:12:11.733436 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:12:11.733448 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:12:17.318544 | orchestrator |
2026-02-04 04:12:17.318654 | orchestrator | TASK [include_role : designate] ************************************************
2026-02-04 04:12:17.318672 | orchestrator | Wednesday 04 February 2026 04:12:11 +0000 (0:00:01.363) 0:02:46.119 ****
2026-02-04 04:12:17.318683 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 04:12:17.318695 | orchestrator |
2026-02-04 04:12:17.318706 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2026-02-04 04:12:17.318717 | orchestrator | Wednesday 04 February 2026 04:12:13 +0000 (0:00:01.808) 0:02:47.928 ****
2026-02-04 04:12:17.318734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:12:17.318775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 04:12:17.318790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 04:12:17.318803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 04:12:17.318829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 04:12:17.318860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:12:17.318872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 
'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 04:12:17.318892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:12:17.318904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 04:12:17.318992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:12:17.319016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 04:12:19.468988 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 04:12:19.469080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 04:12:19.469089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 
04:12:19.469096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 04:12:19.469101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 04:12:19.469116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 
04:12:19.469133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 04:12:19.469144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 04:12:19.469149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 
04:12:19.469153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 04:12:19.469159 | orchestrator | 2026-02-04 04:12:19.469165 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-04 04:12:19.469171 | orchestrator | Wednesday 04 February 2026 04:12:18 +0000 (0:00:05.072) 0:02:53.000 **** 2026-02-04 04:12:19.469180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:12:19.469191 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 04:12:20.738440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 04:12:20.738537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 
5672'], 'timeout': '30'}}})  2026-02-04 04:12:20.738553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 04:12:20.738565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:12:20.738575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 
5672'], 'timeout': '30'}}})  2026-02-04 04:12:20.738602 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:12:20.738630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:12:20.738684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 04:12:20.738704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 04:12:20.738721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:12:20.738761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 04:12:20.738784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-04 04:12:20.738826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 04:12:35.797195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-04 04:12:35.797318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:12:35.797336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-04 04:12:35.797350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 04:12:35.797378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-04 04:12:35.797413 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:12:35.797427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:12:35.797458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-04 04:12:35.797471 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:12:35.797483 | orchestrator | 2026-02-04 04:12:35.797495 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-04 04:12:35.797507 | orchestrator | Wednesday 04 February 2026 04:12:20 +0000 (0:00:02.127) 0:02:55.128 **** 2026-02-04 04:12:35.797520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:12:35.797534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:12:35.797547 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:12:35.797558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:12:35.797569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-04 
04:12:35.797581 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:12:35.797592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:12:35.797603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:12:35.797614 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:12:35.797625 | orchestrator | 2026-02-04 04:12:35.797636 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-04 04:12:35.797647 | orchestrator | Wednesday 04 February 2026 04:12:22 +0000 (0:00:02.132) 0:02:57.260 **** 2026-02-04 04:12:35.797659 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:12:35.797670 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:12:35.797681 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:12:35.797692 | orchestrator | 2026-02-04 04:12:35.797705 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-04 04:12:35.797727 | orchestrator | Wednesday 04 February 2026 04:12:25 +0000 (0:00:02.236) 0:02:59.496 **** 2026-02-04 04:12:35.797740 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:12:35.797752 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:12:35.797765 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:12:35.797777 | orchestrator | 2026-02-04 04:12:35.797791 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-04 04:12:35.797804 | orchestrator | Wednesday 04 February 2026 04:12:28 +0000 (0:00:02.922) 0:03:02.419 **** 2026-02-04 04:12:35.797817 | orchestrator | skipping: 
[testbed-node-0] 2026-02-04 04:12:35.797830 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:12:35.797847 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:12:35.797860 | orchestrator | 2026-02-04 04:12:35.797874 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-04 04:12:35.797888 | orchestrator | Wednesday 04 February 2026 04:12:29 +0000 (0:00:01.365) 0:03:03.785 **** 2026-02-04 04:12:35.797901 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:12:35.797962 | orchestrator | 2026-02-04 04:12:35.797976 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-04 04:12:35.797988 | orchestrator | Wednesday 04 February 2026 04:12:31 +0000 (0:00:01.836) 0:03:05.621 **** 2026-02-04 04:12:35.798070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 04:12:36.911289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 
'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 04:12:36.911459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 04:12:36.911501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 04:12:36.911529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-04 04:12:36.911551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 04:12:40.419453 | orchestrator | 2026-02-04 04:12:40.419555 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-04 04:12:40.419571 | orchestrator | Wednesday 04 February 2026 04:12:36 +0000 (0:00:05.682) 0:03:11.303 **** 2026-02-04 04:12:40.419605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 04:12:40.419623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 04:12:40.419657 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:12:40.419695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 04:12:40.419710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 04:12:40.419730 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:12:40.419763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-04 04:12:59.107605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-04 04:12:59.107778 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:12:59.107807 | orchestrator | 2026-02-04 04:12:59.107868 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-04 04:12:59.107891 | orchestrator | Wednesday 04 February 2026 04:12:41 +0000 (0:00:04.626) 0:03:15.930 **** 2026-02-04 04:12:59.108001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 04:12:59.108025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 04:12:59.108046 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:12:59.108086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option 
httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 04:12:59.108137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 04:12:59.108160 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:12:59.108182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 04:12:59.108205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-04 04:12:59.108225 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:12:59.108245 | orchestrator | 2026-02-04 04:12:59.108265 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-04 04:12:59.108282 | orchestrator | Wednesday 04 February 2026 04:12:46 +0000 (0:00:04.658) 0:03:20.589 **** 2026-02-04 04:12:59.108300 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:12:59.108336 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:12:59.108355 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:12:59.108373 | orchestrator | 2026-02-04 04:12:59.108392 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-04 04:12:59.108405 | orchestrator | Wednesday 04 February 2026 04:12:48 +0000 (0:00:02.336) 0:03:22.925 **** 2026-02-04 04:12:59.108416 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:12:59.108427 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:12:59.108438 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:12:59.108449 | orchestrator | 2026-02-04 04:12:59.108460 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-04 04:12:59.108470 | orchestrator | Wednesday 04 February 2026 04:12:51 +0000 (0:00:02.788) 0:03:25.713 **** 2026-02-04 04:12:59.108481 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:12:59.108492 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:12:59.108503 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:12:59.108513 | orchestrator | 2026-02-04 04:12:59.108524 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-04 04:12:59.108535 | orchestrator | Wednesday 04 February 2026 04:12:52 +0000 (0:00:01.347) 0:03:27.061 **** 2026-02-04 04:12:59.108545 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-04 04:12:59.108556 | orchestrator | 2026-02-04 04:12:59.108567 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-04 04:12:59.108577 | orchestrator | Wednesday 04 February 2026 04:12:54 +0000 (0:00:01.790) 0:03:28.851 **** 2026-02-04 04:12:59.108589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:12:59.108621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:13:16.198984 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:13:16.199128 | orchestrator | 2026-02-04 04:13:16.199187 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-04 04:13:16.199208 | orchestrator | Wednesday 04 February 2026 04:12:59 +0000 (0:00:04.645) 0:03:33.497 **** 2026-02-04 04:13:16.199229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:13:16.199249 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:13:16.199270 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:13:16.199289 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:13:16.199307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:13:16.199326 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:13:16.199344 | orchestrator | 2026-02-04 04:13:16.199380 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-04 04:13:16.199400 | orchestrator | Wednesday 04 February 2026 04:13:00 +0000 (0:00:01.726) 0:03:35.223 **** 2026-02-04 
04:13:16.199421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:13:16.199444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:13:16.199468 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:13:16.199523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:13:16.199546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:13:16.199584 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:13:16.199603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:13:16.199623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:13:16.199644 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:13:16.199663 | orchestrator | 2026-02-04 04:13:16.199682 | orchestrator | TASK [proxysql-config : Copying over grafana 
ProxySQL users config] ************ 2026-02-04 04:13:16.199701 | orchestrator | Wednesday 04 February 2026 04:13:02 +0000 (0:00:01.515) 0:03:36.739 **** 2026-02-04 04:13:16.199719 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:13:16.199740 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:13:16.199761 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:13:16.199781 | orchestrator | 2026-02-04 04:13:16.199800 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-04 04:13:16.199819 | orchestrator | Wednesday 04 February 2026 04:13:04 +0000 (0:00:02.220) 0:03:38.959 **** 2026-02-04 04:13:16.199837 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:13:16.199854 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:13:16.199872 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:13:16.199922 | orchestrator | 2026-02-04 04:13:16.199953 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-04 04:13:16.199972 | orchestrator | Wednesday 04 February 2026 04:13:07 +0000 (0:00:03.076) 0:03:42.035 **** 2026-02-04 04:13:16.199991 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:13:16.200010 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:13:16.200028 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:13:16.200046 | orchestrator | 2026-02-04 04:13:16.200065 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-04 04:13:16.200084 | orchestrator | Wednesday 04 February 2026 04:13:09 +0000 (0:00:01.420) 0:03:43.456 **** 2026-02-04 04:13:16.200103 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:13:16.200121 | orchestrator | 2026-02-04 04:13:16.200136 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-04 04:13:16.200147 | orchestrator | Wednesday 04 February 2026 04:13:10 +0000 (0:00:01.849) 
0:03:45.305 **** 2026-02-04 04:13:16.200189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 04:13:18.053739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 04:13:18.053828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-04 04:13:18.053849 | orchestrator | 2026-02-04 04:13:18.053862 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-04 04:13:18.053868 | orchestrator | Wednesday 04 February 2026 04:13:16 +0000 (0:00:05.283) 0:03:50.589 **** 2026-02-04 04:13:18.053874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 04:13:18.053879 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:13:18.053931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 04:13:27.894001 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:13:27.894191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-04 04:13:27.894235 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:13:27.894247 | orchestrator | 2026-02-04 04:13:27.894260 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-04 04:13:27.894273 | orchestrator | Wednesday 04 February 2026 04:13:18 +0000 (0:00:01.856) 0:03:52.445 **** 2026-02-04 04:13:27.894286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-04 04:13:27.894301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 04:13:27.894315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-04 04:13:27.894328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2026-02-04 04:13:27.894340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-04 04:13:27.894353 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:13:27.894383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-04 04:13:27.894412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 04:13:27.894425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-04 04:13:27.894436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 04:13:27.894448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-02-04 04:13:27.894467 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:13:27.894479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-04 04:13:27.894495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 04:13:27.894507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-04 04:13:27.894518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-04 04:13:27.894529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-04 04:13:27.894540 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:13:27.894551 | orchestrator | 2026-02-04 04:13:27.894563 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users 
config] ************ 2026-02-04 04:13:27.894574 | orchestrator | Wednesday 04 February 2026 04:13:20 +0000 (0:00:02.056) 0:03:54.501 **** 2026-02-04 04:13:27.894585 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:13:27.894597 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:13:27.894607 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:13:27.894618 | orchestrator | 2026-02-04 04:13:27.894629 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-04 04:13:27.894640 | orchestrator | Wednesday 04 February 2026 04:13:22 +0000 (0:00:02.288) 0:03:56.790 **** 2026-02-04 04:13:27.894651 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:13:27.894661 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:13:27.894672 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:13:27.894683 | orchestrator | 2026-02-04 04:13:27.894694 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-04 04:13:27.894705 | orchestrator | Wednesday 04 February 2026 04:13:26 +0000 (0:00:03.702) 0:04:00.493 **** 2026-02-04 04:13:27.894716 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:13:27.894726 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:13:27.894737 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:13:27.894748 | orchestrator | 2026-02-04 04:13:27.894759 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-04 04:13:27.894771 | orchestrator | Wednesday 04 February 2026 04:13:27 +0000 (0:00:01.574) 0:04:02.067 **** 2026-02-04 04:13:27.894788 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:13:37.980384 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:13:37.980497 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:13:37.980513 | orchestrator | 2026-02-04 04:13:37.980526 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-04 
04:13:37.980539 | orchestrator | Wednesday 04 February 2026 04:13:29 +0000 (0:00:01.334) 0:04:03.402 **** 2026-02-04 04:13:37.980550 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:13:37.980561 | orchestrator | 2026-02-04 04:13:37.980572 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-04 04:13:37.980607 | orchestrator | Wednesday 04 February 2026 04:13:31 +0000 (0:00:02.098) 0:04:05.501 **** 2026-02-04 04:13:37.980625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-04 04:13:37.980657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-04 04:13:37.980670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 04:13:37.980684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 04:13:37.980714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 04:13:37.980734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 04:13:37.980762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-04 04:13:37.980784 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 04:13:37.980803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 04:13:37.980824 | orchestrator | 2026-02-04 04:13:37.980844 | orchestrator | 
TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-04 04:13:37.980865 | orchestrator | Wednesday 04 February 2026 04:13:35 +0000 (0:00:04.830) 0:04:10.331 **** 2026-02-04 04:13:37.980980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-04 04:13:39.626181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': 
'30'}}})  2026-02-04 04:13:39.626260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 04:13:39.626269 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:13:39.626289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-04 04:13:39.626295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 04:13:39.626301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 04:13:39.626319 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:13:39.626336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-04 04:13:39.626341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-04 04:13:39.626350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-04 04:13:39.626355 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:13:39.626360 | orchestrator | 2026-02-04 04:13:39.626365 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 
2026-02-04 04:13:39.626371 | orchestrator | Wednesday 04 February 2026 04:13:37 +0000 (0:00:02.036) 0:04:12.368 **** 2026-02-04 04:13:39.626378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-04 04:13:39.626386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-04 04:13:39.626392 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:13:39.626397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-04 04:13:39.626406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-04 04:13:39.626411 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:13:39.626416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-04 04:13:39.626421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-04 04:13:39.626426 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:13:39.626431 | orchestrator | 2026-02-04 04:13:39.626436 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-04 04:13:39.626444 | orchestrator | Wednesday 04 February 2026 04:13:39 +0000 (0:00:01.641) 0:04:14.009 **** 2026-02-04 04:13:55.266584 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:13:55.266666 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:13:55.266674 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:13:55.266679 | orchestrator | 2026-02-04 04:13:55.266685 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-04 04:13:55.266691 | orchestrator | Wednesday 04 February 2026 04:13:41 +0000 (0:00:02.209) 0:04:16.219 **** 2026-02-04 04:13:55.266695 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:13:55.266700 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:13:55.266704 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:13:55.266709 | orchestrator | 2026-02-04 04:13:55.266714 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-04 04:13:55.266718 | orchestrator | Wednesday 04 February 2026 04:13:45 +0000 (0:00:03.272) 0:04:19.492 **** 2026-02-04 04:13:55.266723 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:13:55.266728 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:13:55.266732 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:13:55.266737 | orchestrator | 2026-02-04 04:13:55.266742 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-04 04:13:55.266746 | orchestrator | Wednesday 04 February 2026 04:13:46 +0000 (0:00:01.443) 0:04:20.936 **** 
2026-02-04 04:13:55.266751 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:13:55.266755 | orchestrator | 2026-02-04 04:13:55.266760 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-04 04:13:55.266764 | orchestrator | Wednesday 04 February 2026 04:13:48 +0000 (0:00:01.857) 0:04:22.794 **** 2026-02-04 04:13:55.266783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:13:55.266809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 04:13:55.266816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:13:55.266831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 04:13:55.266839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:13:55.266844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}})  2026-02-04 04:13:55.266856 | orchestrator | 2026-02-04 04:13:55.266893 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-04 04:13:55.266899 | orchestrator | Wednesday 04 February 2026 04:13:53 +0000 (0:00:05.131) 0:04:27.925 **** 2026-02-04 04:13:55.266904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:13:55.266913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 04:14:08.484199 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:14:08.484315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:14:08.484354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 04:14:08.484382 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:14:08.484390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:14:08.484398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-04 04:14:08.484405 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:14:08.484413 | orchestrator | 2026-02-04 04:14:08.484421 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-04 04:14:08.484429 | orchestrator | Wednesday 04 February 2026 04:13:55 +0000 (0:00:01.730) 0:04:29.655 **** 2026-02-04 04:14:08.484448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:14:08.484459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:14:08.484468 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:14:08.484475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:14:08.484482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:14:08.484488 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:14:08.484495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:14:08.484511 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:14:08.484518 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:14:08.484525 | orchestrator | 2026-02-04 04:14:08.484532 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-04 04:14:08.484538 | orchestrator | Wednesday 04 February 2026 04:13:57 +0000 (0:00:02.048) 0:04:31.704 **** 2026-02-04 04:14:08.484545 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:14:08.484552 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:14:08.484559 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:14:08.484566 | orchestrator | 2026-02-04 04:14:08.484572 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-04 04:14:08.484579 | orchestrator | Wednesday 04 February 2026 04:13:59 +0000 (0:00:02.266) 0:04:33.970 **** 2026-02-04 04:14:08.484585 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:14:08.484592 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:14:08.484599 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:14:08.484605 | orchestrator | 2026-02-04 04:14:08.484612 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-04 04:14:08.484619 | orchestrator | Wednesday 04 February 2026 04:14:02 +0000 (0:00:02.957) 0:04:36.928 **** 2026-02-04 04:14:08.484626 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:14:08.484632 | orchestrator | 2026-02-04 04:14:08.484639 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-04 04:14:08.484645 | orchestrator | Wednesday 04 February 2026 04:14:04 +0000 (0:00:02.118) 0:04:39.047 **** 2026-02-04 04:14:08.484653 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:14:08.484661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:14:08.484675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 04:14:10.248801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 04:14:10.248980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': 
['option httpchk']}}}}) 2026-02-04 04:14:10.249000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:14:10.249014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 04:14:10.249027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 04:14:10.249058 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:14:10.249097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:14:10.249110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 04:14:10.249121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 04:14:10.249134 | orchestrator | 2026-02-04 04:14:10.249147 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-04 04:14:10.249159 | orchestrator | Wednesday 04 February 2026 04:14:09 +0000 (0:00:04.949) 0:04:43.997 **** 2026-02-04 04:14:10.249172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:14:10.249200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:14:13.345171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 04:14:13.345273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 04:14:13.345289 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:14:13.345305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:14:13.345318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:14:13.345330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 04:14:13.345380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 04:14:13.345394 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:14:13.345405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:14:13.346258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:14:13.346305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-04 04:14:13.346335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-04 04:14:13.346369 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:14:13.346388 | orchestrator | 2026-02-04 04:14:13.346405 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-04 04:14:13.346423 | orchestrator | Wednesday 04 February 2026 04:14:11 +0000 (0:00:01.768) 0:04:45.765 **** 2026-02-04 04:14:13.346442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:14:13.346462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:14:13.346482 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:14:13.346499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}})  2026-02-04 04:14:13.346534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:14:29.015702 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:14:29.015817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:14:29.015838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:14:29.015904 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:14:29.015916 | orchestrator | 2026-02-04 04:14:29.015928 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-04 04:14:29.015940 | orchestrator | Wednesday 04 February 2026 04:14:13 +0000 (0:00:01.967) 0:04:47.733 **** 2026-02-04 04:14:29.015952 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:14:29.015964 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:14:29.015992 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:14:29.016003 | orchestrator | 2026-02-04 04:14:29.016015 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-04 04:14:29.016026 | orchestrator | Wednesday 04 February 2026 04:14:15 +0000 (0:00:02.288) 0:04:50.022 **** 2026-02-04 04:14:29.016037 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:14:29.016048 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:14:29.016059 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:14:29.016069 | 
orchestrator | 2026-02-04 04:14:29.016081 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-04 04:14:29.016091 | orchestrator | Wednesday 04 February 2026 04:14:18 +0000 (0:00:02.945) 0:04:52.967 **** 2026-02-04 04:14:29.016103 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:14:29.016114 | orchestrator | 2026-02-04 04:14:29.016125 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-04 04:14:29.016136 | orchestrator | Wednesday 04 February 2026 04:14:21 +0000 (0:00:02.528) 0:04:55.495 **** 2026-02-04 04:14:29.016147 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 04:14:29.016158 | orchestrator | 2026-02-04 04:14:29.016169 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-04 04:14:29.016180 | orchestrator | Wednesday 04 February 2026 04:14:25 +0000 (0:00:04.004) 0:04:59.500 **** 2026-02-04 04:14:29.016196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:14:29.016270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 04:14:29.016297 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:14:29.016332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:14:29.016367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 04:14:29.016400 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:14:29.016436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:14:32.996234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 04:14:32.996342 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:14:32.996359 | orchestrator | 2026-02-04 04:14:32.996371 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-04 04:14:32.996383 | orchestrator | Wednesday 04 February 2026 04:14:28 +0000 (0:00:03.897) 0:05:03.397 **** 2026-02-04 04:14:32.996399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:14:32.996441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 04:14:32.996462 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:14:32.996517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:14:32.996553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 04:14:32.996574 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:14:32.996593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:14:32.996628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-04 04:14:49.620440 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:14:49.620517 | orchestrator | 2026-02-04 04:14:49.620525 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-04 04:14:49.620532 | orchestrator | Wednesday 04 February 2026 04:14:32 +0000 (0:00:03.985) 0:05:07.383 **** 2026-02-04 04:14:49.620551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 04:14:49.620574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 04:14:49.620580 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:14:49.620585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 04:14:49.620591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  
2026-02-04 04:14:49.620596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 04:14:49.620601 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:14:49.620606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-04 04:14:49.620611 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:14:49.620616 | orchestrator | 2026-02-04 04:14:49.620621 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-04 04:14:49.620626 | orchestrator | Wednesday 04 February 2026 04:14:36 +0000 (0:00:03.879) 0:05:11.263 **** 2026-02-04 04:14:49.620631 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:14:49.620645 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:14:49.620650 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:14:49.620655 | orchestrator | 2026-02-04 04:14:49.620660 | orchestrator | TASK [proxysql-config : Copying over 
mariadb ProxySQL rules config] ************ 2026-02-04 04:14:49.620669 | orchestrator | Wednesday 04 February 2026 04:14:40 +0000 (0:00:03.137) 0:05:14.400 **** 2026-02-04 04:14:49.620674 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:14:49.620679 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:14:49.620687 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:14:49.620692 | orchestrator | 2026-02-04 04:14:49.620697 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-04 04:14:49.620702 | orchestrator | Wednesday 04 February 2026 04:14:42 +0000 (0:00:02.723) 0:05:17.123 **** 2026-02-04 04:14:49.620707 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:14:49.620711 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:14:49.620716 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:14:49.620721 | orchestrator | 2026-02-04 04:14:49.620726 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-04 04:14:49.620730 | orchestrator | Wednesday 04 February 2026 04:14:44 +0000 (0:00:01.416) 0:05:18.539 **** 2026-02-04 04:14:49.620735 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:14:49.620740 | orchestrator | 2026-02-04 04:14:49.620745 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-04 04:14:49.620750 | orchestrator | Wednesday 04 February 2026 04:14:46 +0000 (0:00:02.300) 0:05:20.840 **** 2026-02-04 04:14:49.620755 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 04:14:49.620762 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 04:14:49.620768 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 04:14:49.620773 | orchestrator 
| 2026-02-04 04:14:49.620778 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-04 04:14:49.620784 | orchestrator | Wednesday 04 February 2026 04:14:49 +0000 (0:00:02.624) 0:05:23.464 **** 2026-02-04 04:14:49.620796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 04:15:04.150110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 04:15:04.150193 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:15:04.150209 | orchestrator | skipping: 
[testbed-node-1] 2026-02-04 04:15:04.150220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 04:15:04.150230 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:15:04.150241 | orchestrator | 2026-02-04 04:15:04.150257 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-04 04:15:04.150281 | orchestrator | Wednesday 04 February 2026 04:14:50 +0000 (0:00:01.785) 0:05:25.250 **** 2026-02-04 04:15:04.150300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-04 04:15:04.150316 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:15:04.150331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-04 04:15:04.150347 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:15:04.150363 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-04 04:15:04.150377 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:15:04.150392 | orchestrator | 2026-02-04 04:15:04.150406 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-04 04:15:04.150443 | orchestrator | Wednesday 04 February 2026 04:14:52 +0000 (0:00:01.507) 0:05:26.757 **** 2026-02-04 04:15:04.150458 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:15:04.150473 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:15:04.150490 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:15:04.150506 | orchestrator | 2026-02-04 04:15:04.150522 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-04 04:15:04.150534 | orchestrator | Wednesday 04 February 2026 04:14:53 +0000 (0:00:01.503) 0:05:28.260 **** 2026-02-04 04:15:04.150543 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:15:04.150552 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:15:04.150560 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:15:04.150569 | orchestrator | 2026-02-04 04:15:04.150578 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-04 04:15:04.150586 | orchestrator | Wednesday 04 February 2026 04:14:56 +0000 (0:00:02.212) 0:05:30.473 **** 2026-02-04 04:15:04.150595 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:15:04.150604 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:15:04.150612 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:15:04.150621 | orchestrator | 2026-02-04 04:15:04.150632 | orchestrator | TASK [include_role : neutron] 
************************************************** 2026-02-04 04:15:04.150643 | orchestrator | Wednesday 04 February 2026 04:14:57 +0000 (0:00:01.653) 0:05:32.126 **** 2026-02-04 04:15:04.150654 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:15:04.150664 | orchestrator | 2026-02-04 04:15:04.150674 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-04 04:15:04.150685 | orchestrator | Wednesday 04 February 2026 04:14:59 +0000 (0:00:01.992) 0:05:34.118 **** 2026-02-04 04:15:04.150719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:15:04.150736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 04:15:04.150749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-04 04:15:04.150769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 
'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-04 04:15:04.150792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 04:15:04.388708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-04 04:15:04.388791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-04 04:15:04.388808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-04 04:15:04.388886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 04:15:04.388901 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 04:15:04.388913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-04 04:15:04.388954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-04 04:15:04.388968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 04:15:04.388982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 04:15:04.389034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 04:15:04.389048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:15:04.389074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 04:15:04.506205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-04 04:15:04.506295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-04 04:15:04.506305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-04 04:15:04.506321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:15:04.506339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-04 04:15:04.506346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-04 04:15:04.506356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-04 04:15:04.506362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-04 04:15:04.506368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-04 04:15:04.506378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 04:15:04.506389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-04 04:15:04.678645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:04.678754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:04.678780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-02-04 04:15:04.678801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-04 04:15:04.678875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-04 04:15:04.678901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-04 04:15:04.679045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:04.679104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 04:15:04.679129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-04 04:15:04.679151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-04 04:15:04.679178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 04:15:04.679199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:04.679247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-02-04 04:15:07.015257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-04 04:15:07.015346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:07.015381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-04 04:15:07.015397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-04 04:15:07.015411 | orchestrator |
2026-02-04 04:15:07.015444 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-02-04 04:15:07.015457 | orchestrator | Wednesday 04 February 2026 04:15:05 +0000 (0:00:06.119) 0:05:40.238 ****
2026-02-04 04:15:07.015487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-02-04 04:15:07.015502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:07.015515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-02-04 04:15:07.015532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-02-04 04:15:07.015553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:07.015565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-04 04:15:07.015585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-04 04:15:07.103727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-02-04 04:15:07.103811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 04:15:07.103867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:07.103902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 04:15:07.103934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-02-04 04:15:07.103950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:07.103963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-02-04 04:15:07.103984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-02-04 04:15:07.104007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-04 04:15:07.104021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:07.104043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:07.199921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-04 04:15:07.200022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-04 04:15:07.200059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-04 04:15:07.200073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-04 04:15:07.200086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-04 04:15:07.200098 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:15:07.200127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-04 04:15:07.200141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-02-04 04:15:07.200159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:07.200178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:07.200191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-02-04 04:15:07.200212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-02-04 04:15:08.490586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-02-04 04:15:08.490729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-04 04:15:08.490760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:08.490778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-04 04:15:08.490792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-04 04:15:08.490804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-04 04:15:08.490889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 04:15:08.490925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-04 04:15:08.490939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 04:15:08.490951 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:15:08.490968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-04 04:15:08.490990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-04 04:15:08.491023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-04 04:15:24.157730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-04 04:15:24.157981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-04 04:15:24.158088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-04 04:15:24.158120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-04 04:15:24.158140 | orchestrator | skipping: [testbed-node-2] 2026-02-04 
04:15:24.158160 | orchestrator | 2026-02-04 04:15:24.158179 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-04 04:15:24.158197 | orchestrator | Wednesday 04 February 2026 04:15:08 +0000 (0:00:02.637) 0:05:42.876 **** 2026-02-04 04:15:24.158246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:15:24.158267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:15:24.158289 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:15:24.158306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:15:24.158371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:15:24.158389 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:15:24.158416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:15:24.158427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:15:24.158438 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:15:24.158449 | orchestrator | 2026-02-04 04:15:24.158461 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-04 04:15:24.158472 | orchestrator | Wednesday 04 February 2026 04:15:11 +0000 (0:00:02.911) 0:05:45.788 **** 2026-02-04 04:15:24.158483 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:15:24.158494 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:15:24.158504 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:15:24.158515 | orchestrator | 2026-02-04 04:15:24.158526 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-04 04:15:24.158537 | orchestrator | Wednesday 04 February 2026 04:15:13 +0000 (0:00:02.272) 0:05:48.061 **** 2026-02-04 04:15:24.158547 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:15:24.158566 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:15:24.158577 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:15:24.158588 | orchestrator | 2026-02-04 04:15:24.158599 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-04 04:15:24.158610 | orchestrator | Wednesday 04 February 2026 04:15:16 +0000 (0:00:03.084) 0:05:51.145 **** 2026-02-04 04:15:24.158620 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:15:24.158631 | orchestrator | 2026-02-04 04:15:24.158642 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-04 04:15:24.158653 | orchestrator | Wednesday 04 February 2026 04:15:19 +0000 (0:00:02.597) 0:05:53.743 **** 2026-02-04 04:15:24.158665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-04 04:15:24.158679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET 
/']}}}}) 2026-02-04 04:15:24.158714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-04 04:15:42.138689 | orchestrator | 2026-02-04 04:15:42.138907 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-04 04:15:42.138943 | orchestrator | Wednesday 04 February 2026 04:15:24 +0000 (0:00:04.801) 0:05:58.544 **** 2026-02-04 04:15:42.138991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-04 04:15:42.139019 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:15:42.139043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-04 04:15:42.139063 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:15:42.139082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-04 04:15:42.139133 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:15:42.139155 | orchestrator | 2026-02-04 04:15:42.139174 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-04 04:15:42.139192 | orchestrator | Wednesday 04 February 2026 04:15:25 +0000 (0:00:01.565) 0:06:00.110 **** 2026-02-04 04:15:42.139214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-04 04:15:42.139262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-04 04:15:42.139286 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:15:42.139305 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-04 04:15:42.139334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-04 04:15:42.139354 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:15:42.139375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-04 04:15:42.139394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-04 04:15:42.139413 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:15:42.139434 | orchestrator | 2026-02-04 04:15:42.139452 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-04 04:15:42.139471 | orchestrator | Wednesday 04 February 2026 04:15:27 +0000 (0:00:01.977) 0:06:02.087 **** 2026-02-04 04:15:42.139490 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:15:42.139509 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:15:42.139527 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:15:42.139545 | orchestrator | 2026-02-04 04:15:42.139564 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-04 04:15:42.139582 | orchestrator | 
Wednesday 04 February 2026 04:15:30 +0000 (0:00:02.433) 0:06:04.520 **** 2026-02-04 04:15:42.139600 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:15:42.139618 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:15:42.139636 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:15:42.139652 | orchestrator | 2026-02-04 04:15:42.139671 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-04 04:15:42.139705 | orchestrator | Wednesday 04 February 2026 04:15:33 +0000 (0:00:03.050) 0:06:07.571 **** 2026-02-04 04:15:42.139725 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:15:42.139742 | orchestrator | 2026-02-04 04:15:42.139761 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-04 04:15:42.139779 | orchestrator | Wednesday 04 February 2026 04:15:35 +0000 (0:00:02.441) 0:06:10.012 **** 2026-02-04 04:15:42.139799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:15:42.139866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:15:43.283723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:15:43.283893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:15:43.283961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:15:43.283976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 04:15:43.284028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:15:43.284061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:15:43.284080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 04:15:43.284113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:15:43.284134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:15:43.284151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 04:15:43.284169 | orchestrator | 2026-02-04 04:15:43.284188 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-04 04:15:43.284219 | orchestrator | Wednesday 04 February 2026 04:15:43 +0000 (0:00:07.660) 0:06:17.672 **** 2026-02-04 04:15:44.020396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:15:44.020543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:15:44.020560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:15:44.020571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 04:15:44.020580 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:15:44.020610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:15:44.020621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:15:44.020637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:15:44.020645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 04:15:44.020653 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:15:44.020662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:15:44.020682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:16:05.954710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-04 04:16:05.954878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-04 04:16:05.954898 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:16:05.954911 | orchestrator | 2026-02-04 04:16:05.954922 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-04 04:16:05.954933 | orchestrator | Wednesday 04 February 2026 04:15:45 +0000 (0:00:01.883) 0:06:19.556 **** 2026-02-04 04:16:05.954944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:16:05.954958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:16:05.954970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 
04:16:05.954981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:16:05.955004 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:16:05.955025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:16:05.955035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:16:05.955046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:16:05.955071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:16:05.955101 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:16:05.955113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:16:05.955139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:16:05.955150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:16:05.955159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:16:05.955169 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:16:05.955179 | orchestrator | 2026-02-04 04:16:05.955188 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-04 04:16:05.955198 | orchestrator | Wednesday 04 February 2026 04:15:47 +0000 (0:00:02.692) 0:06:22.249 **** 2026-02-04 04:16:05.955208 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:16:05.955218 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:16:05.955227 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:16:05.955237 | orchestrator | 2026-02-04 04:16:05.955247 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-04 04:16:05.955259 | orchestrator | Wednesday 04 February 2026 04:15:50 +0000 (0:00:02.345) 0:06:24.595 **** 2026-02-04 04:16:05.955271 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:16:05.955282 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:16:05.955294 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:16:05.955305 | orchestrator | 2026-02-04 04:16:05.955318 | orchestrator | TASK [include_role : nova-cell] 
************************************************ 2026-02-04 04:16:05.955330 | orchestrator | Wednesday 04 February 2026 04:15:53 +0000 (0:00:03.028) 0:06:27.623 **** 2026-02-04 04:16:05.955342 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:16:05.955353 | orchestrator | 2026-02-04 04:16:05.955365 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-04 04:16:05.955376 | orchestrator | Wednesday 04 February 2026 04:15:56 +0000 (0:00:02.839) 0:06:30.462 **** 2026-02-04 04:16:05.955388 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-04 04:16:05.955401 | orchestrator | 2026-02-04 04:16:05.955412 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-04 04:16:05.955424 | orchestrator | Wednesday 04 February 2026 04:15:57 +0000 (0:00:01.657) 0:06:32.120 **** 2026-02-04 04:16:05.955436 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-04 04:16:05.955451 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-04 04:16:05.955472 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-04 04:16:05.955484 | orchestrator | 2026-02-04 04:16:05.955501 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-04 04:16:05.955514 | orchestrator | Wednesday 04 February 2026 04:16:03 +0000 (0:00:05.809) 0:06:37.930 **** 2026-02-04 04:16:05.955527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 04:16:05.955545 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:16:29.312318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 04:16:29.312481 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:16:29.312501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 04:16:29.312515 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:16:29.312526 | orchestrator | 2026-02-04 04:16:29.312539 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-04 04:16:29.312551 | orchestrator | Wednesday 04 February 2026 04:16:05 +0000 (0:00:02.411) 0:06:40.342 **** 2026-02-04 04:16:29.312572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 04:16:29.312597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 04:16:29.312618 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:16:29.312636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
2026-02-04 04:16:29.312695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 04:16:29.312716 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:16:29.312735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 04:16:29.312749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-04 04:16:29.312761 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:16:29.312773 | orchestrator | 2026-02-04 04:16:29.312819 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-04 04:16:29.312832 | orchestrator | Wednesday 04 February 2026 04:16:08 +0000 (0:00:02.540) 0:06:42.882 **** 2026-02-04 04:16:29.312845 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:16:29.312859 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:16:29.312872 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:16:29.312884 | orchestrator | 2026-02-04 04:16:29.312896 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-04 04:16:29.312909 | orchestrator | Wednesday 04 February 2026 04:16:12 +0000 (0:00:03.814) 0:06:46.697 **** 2026-02-04 04:16:29.312921 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:16:29.312933 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:16:29.312945 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:16:29.312987 
| orchestrator | 2026-02-04 04:16:29.313008 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-04 04:16:29.313028 | orchestrator | Wednesday 04 February 2026 04:16:16 +0000 (0:00:04.061) 0:06:50.759 **** 2026-02-04 04:16:29.313049 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-04 04:16:29.313071 | orchestrator | 2026-02-04 04:16:29.313091 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-04 04:16:29.313105 | orchestrator | Wednesday 04 February 2026 04:16:18 +0000 (0:00:01.740) 0:06:52.499 **** 2026-02-04 04:16:29.313138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 04:16:29.313152 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:16:29.313163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 04:16:29.313197 | orchestrator | 
skipping: [testbed-node-1] 2026-02-04 04:16:29.313216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 04:16:29.313250 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:16:29.313268 | orchestrator | 2026-02-04 04:16:29.313288 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-04 04:16:29.313307 | orchestrator | Wednesday 04 February 2026 04:16:20 +0000 (0:00:02.722) 0:06:55.222 **** 2026-02-04 04:16:29.313327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 04:16:29.313345 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:16:29.313366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 04:16:29.313386 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:16:29.313408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-04 04:16:29.313438 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:16:29.313450 | orchestrator | 2026-02-04 04:16:29.313461 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-04 04:16:29.313472 | orchestrator | Wednesday 04 February 2026 04:16:23 +0000 (0:00:02.522) 0:06:57.745 **** 2026-02-04 04:16:29.313483 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:16:29.313494 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:16:29.313505 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:16:29.313515 | orchestrator | 2026-02-04 04:16:29.313526 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-04 04:16:29.313537 | orchestrator | Wednesday 04 February 2026 04:16:25 +0000 (0:00:02.245) 0:06:59.991 **** 2026-02-04 04:16:29.313548 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:16:29.313559 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:16:29.313569 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:16:29.313580 | orchestrator | 2026-02-04 04:16:29.313591 | orchestrator | TASK 
[proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-04 04:16:29.313602 | orchestrator | Wednesday 04 February 2026 04:16:29 +0000 (0:00:03.703) 0:07:03.694 **** 2026-02-04 04:16:59.263006 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:16:59.263130 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:16:59.263140 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:16:59.263147 | orchestrator | 2026-02-04 04:16:59.263155 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-04 04:16:59.263163 | orchestrator | Wednesday 04 February 2026 04:16:33 +0000 (0:00:04.250) 0:07:07.945 **** 2026-02-04 04:16:59.263191 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-04 04:16:59.263200 | orchestrator | 2026-02-04 04:16:59.263207 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-04 04:16:59.263214 | orchestrator | Wednesday 04 February 2026 04:16:36 +0000 (0:00:02.838) 0:07:10.783 **** 2026-02-04 04:16:59.263225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-04 04:16:59.263236 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:16:59.263244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-04 04:16:59.263250 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:16:59.263257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-04 04:16:59.263263 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:16:59.263269 | orchestrator | 2026-02-04 04:16:59.263275 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-04 04:16:59.263283 | orchestrator | Wednesday 04 February 2026 04:16:39 +0000 (0:00:03.300) 0:07:14.084 **** 2026-02-04 04:16:59.263289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-04 04:16:59.263295 | orchestrator | 
skipping: [testbed-node-0] 2026-02-04 04:16:59.263316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-04 04:16:59.263323 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:16:59.263346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-04 04:16:59.263358 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:16:59.263365 | orchestrator | 2026-02-04 04:16:59.263371 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-04 04:16:59.263377 | orchestrator | Wednesday 04 February 2026 04:16:42 +0000 (0:00:02.620) 0:07:16.704 **** 2026-02-04 04:16:59.263383 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:16:59.263389 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:16:59.263395 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:16:59.263401 | orchestrator | 2026-02-04 04:16:59.263407 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-04 04:16:59.263413 | 
orchestrator | Wednesday 04 February 2026 04:16:44 +0000 (0:00:02.522) 0:07:19.226 **** 2026-02-04 04:16:59.263419 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:16:59.263426 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:16:59.263432 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:16:59.263438 | orchestrator | 2026-02-04 04:16:59.263444 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-04 04:16:59.263450 | orchestrator | Wednesday 04 February 2026 04:16:48 +0000 (0:00:03.601) 0:07:22.828 **** 2026-02-04 04:16:59.263456 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:16:59.263462 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:16:59.263468 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:16:59.263474 | orchestrator | 2026-02-04 04:16:59.263480 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-04 04:16:59.263486 | orchestrator | Wednesday 04 February 2026 04:16:52 +0000 (0:00:04.521) 0:07:27.350 **** 2026-02-04 04:16:59.263492 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:16:59.263499 | orchestrator | 2026-02-04 04:16:59.263505 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-04 04:16:59.263511 | orchestrator | Wednesday 04 February 2026 04:16:55 +0000 (0:00:02.449) 0:07:29.799 **** 2026-02-04 04:16:59.263518 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 04:16:59.263527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 04:16:59.263541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 04:16:59.263559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 04:17:01.522409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:17:01.522534 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 04:17:01.522549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 04:17:01.522560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 04:17:01.522615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 04:17:01.522626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:17:01.522654 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-04 04:17:01.522664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 04:17:01.522679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 04:17:01.522694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 04:17:01.522727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:17:01.522746 | orchestrator | 2026-02-04 04:17:01.522826 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-04 04:17:01.522846 | orchestrator | Wednesday 04 February 2026 04:17:00 +0000 (0:00:05.069) 0:07:34.869 **** 2026-02-04 04:17:01.522876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 04:17:02.673519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 04:17:02.673653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 04:17:02.673671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 04:17:02.673714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:17:02.673727 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:17:02.673798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 04:17:02.673816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 04:17:02.673850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 04:17:02.673863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 04:17:02.673874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:17:02.673895 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:17:02.673913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-04 04:17:02.673926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-04 04:17:02.673944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-04 04:17:20.085592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-04 04:17:20.085711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-04 04:17:20.085730 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:17:20.085744 | orchestrator | 
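The task output above follows a consistent pattern: each service item carries a `haproxy` sub-dict, and items whose `enabled` flag is falsy (note the mix of booleans and `'yes'`/`'no'` strings) are reported as `skipping`, while enabled ones produce `ok`/`changed` config copies. A minimal sketch of that filtering and rendering logic, assuming a hypothetical `render_listen` helper and an assumed VIP address; this is not kolla-ansible's actual template code, and the sample dict abridges the octavia/nova entries seen in the log:

```python
# Illustrative sketch only: how a service's 'haproxy' sub-dict (as seen in
# the loop items above) could be filtered and rendered into minimal HAProxy
# "listen" sections. render_listen and the VIP 192.168.16.9 are assumptions.

def truthy(value):
    # kolla-style dicts mix booleans and 'yes'/'no' strings for 'enabled'
    return value is True or str(value).lower() in ("yes", "true")

def render_listen(name, opts, vip="192.168.16.9"):
    # Emit a minimal HAProxy listen section for one frontend entry.
    lines = [
        f"listen {name}",
        f"    mode {opts.get('mode', 'http')}",
        f"    bind {vip}:{opts.get('listen_port', opts['port'])}",
    ]
    for extra in opts.get("backend_http_extra", []):
        lines.append(f"    {extra}")
    return "\n".join(lines)

# Sample shaped like the log items (values abridged): octavia_api is
# enabled ('yes'), nova_spicehtml5proxy is disabled (False) and is skipped.
haproxy = {
    "octavia_api": {"enabled": "yes", "mode": "http", "external": False,
                    "port": "9876", "listen_port": "9876"},
    "nova_spicehtml5proxy": {"enabled": False, "mode": "http",
                             "port": "6082", "listen_port": "6082",
                             "backend_http_extra": ["timeout tunnel 1h"]},
}

enabled = {k: v for k, v in haproxy.items() if truthy(v["enabled"])}
config = "\n\n".join(render_listen(k, v) for k, v in sorted(enabled.items()))
print(config)
```

Run against the sample, only the enabled `octavia_api` entry survives, mirroring why the disabled spicehtml5proxy and serialproxy items above are skipped on every node.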
2026-02-04 04:17:20.085814 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-04 04:17:20.085851 | orchestrator | Wednesday 04 February 2026 04:17:02 +0000 (0:00:02.196) 0:07:37.065 **** 2026-02-04 04:17:20.085864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-04 04:17:20.085878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-04 04:17:20.085892 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:17:20.085903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-04 04:17:20.085914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-04 04:17:20.085925 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:17:20.085936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-04 04:17:20.085963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-04 04:17:20.085982 | orchestrator | skipping: [testbed-node-2] 2026-02-04 
04:17:20.086008 | orchestrator | 2026-02-04 04:17:20.086104 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-04 04:17:20.086125 | orchestrator | Wednesday 04 February 2026 04:17:04 +0000 (0:00:02.123) 0:07:39.188 **** 2026-02-04 04:17:20.086139 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:17:20.086152 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:17:20.086165 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:17:20.086177 | orchestrator | 2026-02-04 04:17:20.086190 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-04 04:17:20.086203 | orchestrator | Wednesday 04 February 2026 04:17:07 +0000 (0:00:02.434) 0:07:41.623 **** 2026-02-04 04:17:20.086216 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:17:20.086229 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:17:20.086241 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:17:20.086254 | orchestrator | 2026-02-04 04:17:20.086266 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-04 04:17:20.086279 | orchestrator | Wednesday 04 February 2026 04:17:10 +0000 (0:00:03.146) 0:07:44.769 **** 2026-02-04 04:17:20.086293 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:17:20.086320 | orchestrator | 2026-02-04 04:17:20.086333 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-04 04:17:20.086345 | orchestrator | Wednesday 04 February 2026 04:17:12 +0000 (0:00:02.586) 0:07:47.356 **** 2026-02-04 04:17:20.086380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:17:20.086411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:17:20.086425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:17:20.086448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:17:20.086474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:17:24.068522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:17:24.068619 | orchestrator | 2026-02-04 04:17:24.068640 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-04 04:17:24.068660 | orchestrator | Wednesday 04 February 2026 04:17:20 +0000 (0:00:07.113) 0:07:54.470 **** 2026-02-04 04:17:24.068698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:17:24.068719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-04 04:17:24.068736 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:17:24.068821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:17:24.068873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-04 04:17:24.068886 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:17:24.068903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:17:24.068914 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-04 04:17:24.068931 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:17:24.068941 | orchestrator | 2026-02-04 04:17:24.068952 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-04 04:17:24.068962 | orchestrator | Wednesday 04 February 2026 04:17:22 +0000 (0:00:02.200) 0:07:56.671 **** 2026-02-04 04:17:24.068973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:17:24.068992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-04 04:17:33.123656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-04 04:17:33.123798 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:17:33.123819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:17:33.123832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-04 04:17:33.123845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-04 04:17:33.123857 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:17:33.123868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:17:33.123879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-04 04:17:33.123907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-04 04:17:33.123919 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:17:33.123930 | orchestrator | 2026-02-04 04:17:33.123943 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-04 04:17:33.123955 | orchestrator | Wednesday 04 February 2026 04:17:24 +0000 (0:00:01.790) 0:07:58.462 **** 2026-02-04 04:17:33.123966 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:17:33.123977 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:17:33.123988 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:17:33.123998 | orchestrator | 2026-02-04 04:17:33.124010 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-04 04:17:33.124042 | orchestrator | Wednesday 04 February 2026 04:17:25 +0000 (0:00:01.471) 0:07:59.933 **** 2026-02-04 04:17:33.124053 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:17:33.124064 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:17:33.124075 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:17:33.124085 | orchestrator | 2026-02-04 04:17:33.124096 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-04 04:17:33.124107 | orchestrator | Wednesday 04 February 2026 04:17:27 +0000 (0:00:02.380) 0:08:02.313 **** 2026-02-04 04:17:33.124118 | orchestrator | included: prometheus for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-04 04:17:33.124129 | orchestrator | 2026-02-04 04:17:33.124140 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-04 04:17:33.124150 | orchestrator | Wednesday 04 February 2026 04:17:30 +0000 (0:00:02.622) 0:08:04.936 **** 2026-02-04 04:17:33.124183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-04 04:17:33.124200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 04:17:33.124215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:33.124229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:33.124244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-04 04:17:33.124267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 04:17:33.124313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 04:17:33.124337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:35.121026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:35.121141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 04:17:35.121180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-04 04:17:35.121217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 04:17:35.121231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:35.121242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:35.121278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 04:17:35.121302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:17:35.121332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': 
{'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-04 04:17:35.121345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:35.121357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:17:35.121377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:37.399345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 04:17:37.399463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-04 04:17:37.399501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:37.399514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:37.399528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:17:37.399542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 04:17:37.399573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-04 04:17:37.399602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:37.399615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:37.399627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 04:17:37.399639 | orchestrator | 2026-02-04 04:17:37.399652 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-04 04:17:37.399665 | orchestrator | Wednesday 04 February 2026 04:17:36 +0000 (0:00:05.848) 0:08:10.785 **** 2026-02-04 04:17:37.399677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-04 04:17:37.399698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 04:17:37.537961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:37.538169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:37.538194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 04:17:37.538214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:17:37.538231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-04 04:17:37.538267 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:37.538292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:37.538312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 04:17:37.538327 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:17:37.538343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-04 04:17:37.538357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 04:17:37.538371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 
04:17:37.538384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:37.538416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-04 04:17:38.722931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 04:17:38.723040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-04 04:17:38.723058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:17:38.723073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:38.723086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-04 04:17:38.723155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:38.723170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:38.723182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-04 04:17:38.723194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:38.723206 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:17:38.723228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 04:17:38.723240 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:17:38.723267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-04 04:17:51.723168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:51.723269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:17:51.723282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-04 04:17:51.723291 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:17:51.723302 | orchestrator | 2026-02-04 04:17:51.723312 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-04 04:17:51.723321 | orchestrator | Wednesday 04 February 2026 04:17:38 +0000 (0:00:02.325) 0:08:13.111 **** 2026-02-04 04:17:51.723330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-04 04:17:51.723342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-04 04:17:51.723374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:17:51.723383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:17:51.723392 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:17:51.723401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-04 04:17:51.723422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-04 04:17:51.723445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:17:51.723454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:17:51.723462 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-04 04:17:51.723470 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:17:51.723479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-04 04:17:51.723487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:17:51.723495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-04 04:17:51.723510 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:17:51.723518 | orchestrator | 2026-02-04 04:17:51.723526 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-04 04:17:51.723534 | orchestrator | Wednesday 04 February 2026 04:17:40 +0000 (0:00:02.144) 0:08:15.255 **** 2026-02-04 04:17:51.723542 | 
orchestrator | skipping: [testbed-node-0] 2026-02-04 04:17:51.723550 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:17:51.723557 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:17:51.723565 | orchestrator | 2026-02-04 04:17:51.723573 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-04 04:17:51.723581 | orchestrator | Wednesday 04 February 2026 04:17:42 +0000 (0:00:02.090) 0:08:17.345 **** 2026-02-04 04:17:51.723588 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:17:51.723596 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:17:51.723604 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:17:51.723611 | orchestrator | 2026-02-04 04:17:51.723619 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-04 04:17:51.723627 | orchestrator | Wednesday 04 February 2026 04:17:45 +0000 (0:00:02.454) 0:08:19.800 **** 2026-02-04 04:17:51.723635 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:17:51.723643 | orchestrator | 2026-02-04 04:17:51.723650 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-04 04:17:51.723658 | orchestrator | Wednesday 04 February 2026 04:17:47 +0000 (0:00:02.398) 0:08:22.199 **** 2026-02-04 04:17:51.723677 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 04:18:09.417589 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 04:18:09.417701 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 04:18:09.417817 | orchestrator | 2026-02-04 04:18:09.417830 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-04 04:18:09.417837 | orchestrator | Wednesday 04 February 2026 04:17:51 +0000 (0:00:03.907) 0:08:26.107 **** 2026-02-04 04:18:09.417845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-04 04:18:09.417880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-04 04:18:09.417888 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:18:09.417896 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:18:09.417902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 
'host_group': 'rabbitmq'}}}})  2026-02-04 04:18:09.417916 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:18:09.417922 | orchestrator | 2026-02-04 04:18:09.417929 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-04 04:18:09.417935 | orchestrator | Wednesday 04 February 2026 04:17:53 +0000 (0:00:01.538) 0:08:27.645 **** 2026-02-04 04:18:09.417943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-04 04:18:09.417951 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:18:09.417957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-04 04:18:09.417964 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:18:09.417970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-04 04:18:09.417976 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:18:09.417982 | orchestrator | 2026-02-04 04:18:09.417988 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-04 04:18:09.417995 | orchestrator | Wednesday 04 February 2026 04:17:54 +0000 (0:00:01.409) 0:08:29.054 **** 2026-02-04 04:18:09.418001 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:18:09.418007 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:18:09.418013 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:18:09.418059 | orchestrator | 2026-02-04 04:18:09.418066 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-04 04:18:09.418072 | orchestrator | Wednesday 04 February 2026 04:17:56 +0000 (0:00:02.027) 0:08:31.082 **** 2026-02-04 
04:18:09.418078 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:18:09.418085 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:18:09.418091 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:18:09.418097 | orchestrator | 2026-02-04 04:18:09.418103 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-04 04:18:09.418111 | orchestrator | Wednesday 04 February 2026 04:17:58 +0000 (0:00:02.228) 0:08:33.310 **** 2026-02-04 04:18:09.418118 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:18:09.418126 | orchestrator | 2026-02-04 04:18:09.418133 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-04 04:18:09.418141 | orchestrator | Wednesday 04 February 2026 04:18:01 +0000 (0:00:02.357) 0:08:35.668 **** 2026-02-04 04:18:09.418153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-04 
04:18:09.418169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-04 04:18:11.173441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-04 04:18:11.173547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-04 04:18:11.173584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-04 04:18:11.173636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-04 04:18:11.173651 | orchestrator | 2026-02-04 04:18:11.173664 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-04 04:18:11.173677 | orchestrator | Wednesday 04 February 2026 04:18:09 +0000 (0:00:08.138) 0:08:43.807 **** 2026-02-04 04:18:11.173690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-04 04:18:11.173702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-04 04:18:11.173714 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:18:11.173766 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-04 04:18:11.173800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-04 04:18:33.248500 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:18:33.248654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-04 04:18:33.248692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-04 04:18:33.248717 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:18:33.248824 | orchestrator | 2026-02-04 04:18:33.248845 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-04 04:18:33.248898 | orchestrator | Wednesday 04 February 2026 04:18:11 +0000 (0:00:01.756) 0:08:45.564 **** 2026-02-04 04:18:33.248939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-04 04:18:33.248964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-04 04:18:33.248986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-04 04:18:33.249008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-04 04:18:33.249028 | orchestrator | skipping: 
[testbed-node-0] 2026-02-04 04:18:33.249047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-04 04:18:33.249065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-04 04:18:33.249108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-04 04:18:33.249130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-04 04:18:33.249149 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:18:33.249168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-04 04:18:33.249186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-04 04:18:33.249204 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-04 04:18:33.249224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-04 04:18:33.249245 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:18:33.249267 | orchestrator | 2026-02-04 04:18:33.249290 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-04 04:18:33.249328 | orchestrator | Wednesday 04 February 2026 04:18:13 +0000 (0:00:02.198) 0:08:47.762 **** 2026-02-04 04:18:33.249351 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:18:33.249372 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:18:33.249392 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:18:33.249411 | orchestrator | 2026-02-04 04:18:33.249430 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-04 04:18:33.249449 | orchestrator | Wednesday 04 February 2026 04:18:15 +0000 (0:00:02.330) 0:08:50.093 **** 2026-02-04 04:18:33.249468 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:18:33.249487 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:18:33.249505 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:18:33.249524 | orchestrator | 2026-02-04 04:18:33.249541 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-04 04:18:33.249558 | orchestrator | Wednesday 04 February 2026 04:18:18 +0000 (0:00:03.032) 0:08:53.126 **** 2026-02-04 04:18:33.249575 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:18:33.249593 | orchestrator 
| skipping: [testbed-node-1] 2026-02-04 04:18:33.249610 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:18:33.249629 | orchestrator | 2026-02-04 04:18:33.249647 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-04 04:18:33.249677 | orchestrator | Wednesday 04 February 2026 04:18:20 +0000 (0:00:01.459) 0:08:54.585 **** 2026-02-04 04:18:33.249694 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:18:33.249710 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:18:33.249759 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:18:33.249778 | orchestrator | 2026-02-04 04:18:33.249797 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-04 04:18:33.249812 | orchestrator | Wednesday 04 February 2026 04:18:21 +0000 (0:00:01.471) 0:08:56.057 **** 2026-02-04 04:18:33.249829 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:18:33.249846 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:18:33.249862 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:18:33.249879 | orchestrator | 2026-02-04 04:18:33.249897 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-04 04:18:33.249914 | orchestrator | Wednesday 04 February 2026 04:18:23 +0000 (0:00:01.840) 0:08:57.898 **** 2026-02-04 04:18:33.249932 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:18:33.249950 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:18:33.249968 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:18:33.249987 | orchestrator | 2026-02-04 04:18:33.250005 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-04 04:18:33.250096 | orchestrator | Wednesday 04 February 2026 04:18:24 +0000 (0:00:01.380) 0:08:59.278 **** 2026-02-04 04:18:33.250109 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:18:33.250120 | orchestrator | 
skipping: [testbed-node-1] 2026-02-04 04:18:33.250130 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:18:33.250141 | orchestrator | 2026-02-04 04:18:33.250151 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-02-04 04:18:33.250162 | orchestrator | Wednesday 04 February 2026 04:18:26 +0000 (0:00:01.415) 0:09:00.694 **** 2026-02-04 04:18:33.250173 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:18:33.250185 | orchestrator | 2026-02-04 04:18:33.250196 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-04 04:18:33.250206 | orchestrator | Wednesday 04 February 2026 04:18:29 +0000 (0:00:02.760) 0:09:03.454 **** 2026-02-04 04:18:33.250236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-04 04:18:37.525403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-04 04:18:37.525506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-04 04:18:37.525522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:18:37.525553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:18:37.525565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-04 04:18:37.525578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:18:37.525634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:18:37.525648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-04 04:18:37.525661 | orchestrator | 2026-02-04 04:18:37.525673 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-04 04:18:37.525686 | orchestrator | Wednesday 04 February 2026 04:18:33 +0000 (0:00:04.181) 0:09:07.636 **** 2026-02-04 04:18:37.525698 | orchestrator | changed: [testbed-node-0] => { 2026-02-04 04:18:37.525709 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:18:37.525751 | orchestrator | } 2026-02-04 04:18:37.525764 | orchestrator | changed: [testbed-node-1] => { 2026-02-04 04:18:37.525774 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:18:37.525785 | orchestrator | } 2026-02-04 04:18:37.525796 | orchestrator | changed: [testbed-node-2] => { 2026-02-04 04:18:37.525807 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:18:37.525817 | orchestrator | } 2026-02-04 04:18:37.525828 | orchestrator | 2026-02-04 04:18:37.525839 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-04 04:18:37.525850 | orchestrator | Wednesday 04 February 2026 04:18:34 +0000 (0:00:01.368) 0:09:09.004 **** 2026-02-04 04:18:37.525869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-04 04:18:37.525881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:18:37.525893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:18:37.525921 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:18:37.525935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-04 04:18:37.525957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:20:40.149448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:20:40.149563 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:20:40.149583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-04 04:20:40.149602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-04 04:20:40.149629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-04 04:20:40.149733 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:20:40.149754 | orchestrator | 2026-02-04 04:20:40.149773 | orchestrator | 
RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-04 04:20:40.149791 | orchestrator | Wednesday 04 February 2026 04:18:37 +0000 (0:00:02.904) 0:09:11.908 **** 2026-02-04 04:20:40.149806 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:20:40.149816 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:20:40.149826 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:20:40.149835 | orchestrator | 2026-02-04 04:20:40.149845 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-04 04:20:40.149855 | orchestrator | Wednesday 04 February 2026 04:18:39 +0000 (0:00:01.822) 0:09:13.731 **** 2026-02-04 04:20:40.149865 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:20:40.149874 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:20:40.149884 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:20:40.149893 | orchestrator | 2026-02-04 04:20:40.149903 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-04 04:20:40.149912 | orchestrator | Wednesday 04 February 2026 04:18:40 +0000 (0:00:01.399) 0:09:15.130 **** 2026-02-04 04:20:40.149922 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:20:40.149932 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:20:40.149941 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:20:40.149951 | orchestrator | 2026-02-04 04:20:40.149960 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-04 04:20:40.149970 | orchestrator | Wednesday 04 February 2026 04:18:47 +0000 (0:00:07.192) 0:09:22.323 **** 2026-02-04 04:20:40.149984 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:20:40.149995 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:20:40.150006 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:20:40.150078 | orchestrator | 2026-02-04 04:20:40.150094 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup 
proxysql container] **************** 2026-02-04 04:20:40.150106 | orchestrator | Wednesday 04 February 2026 04:18:55 +0000 (0:00:07.529) 0:09:29.852 **** 2026-02-04 04:20:40.150118 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:20:40.150129 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:20:40.150140 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:20:40.150151 | orchestrator | 2026-02-04 04:20:40.150162 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-04 04:20:40.150173 | orchestrator | Wednesday 04 February 2026 04:19:02 +0000 (0:00:07.138) 0:09:36.991 **** 2026-02-04 04:20:40.150187 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:20:40.150205 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:20:40.150222 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:20:40.150238 | orchestrator | 2026-02-04 04:20:40.150279 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-04 04:20:40.150300 | orchestrator | Wednesday 04 February 2026 04:19:10 +0000 (0:00:07.573) 0:09:44.564 **** 2026-02-04 04:20:40.150315 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:20:40.150327 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:20:40.150339 | orchestrator | 2026-02-04 04:20:40.150350 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-04 04:20:40.150362 | orchestrator | Wednesday 04 February 2026 04:19:14 +0000 (0:00:03.933) 0:09:48.498 **** 2026-02-04 04:20:40.150413 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:20:40.150430 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:20:40.150446 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:20:40.150462 | orchestrator | 2026-02-04 04:20:40.150477 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-04 04:20:40.150492 | orchestrator | Wednesday 
04 February 2026 04:19:27 +0000 (0:00:13.252) 0:10:01.751 **** 2026-02-04 04:20:40.150508 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:20:40.150524 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:20:40.150540 | orchestrator | 2026-02-04 04:20:40.150557 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-04 04:20:40.150587 | orchestrator | Wednesday 04 February 2026 04:19:31 +0000 (0:00:04.624) 0:10:06.375 **** 2026-02-04 04:20:40.150598 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:20:40.150608 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:20:40.150617 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:20:40.150626 | orchestrator | 2026-02-04 04:20:40.150636 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-04 04:20:40.150646 | orchestrator | Wednesday 04 February 2026 04:19:38 +0000 (0:00:06.915) 0:10:13.291 **** 2026-02-04 04:20:40.150656 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:20:40.150699 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:20:40.150711 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:20:40.150721 | orchestrator | 2026-02-04 04:20:40.150731 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-04 04:20:40.150741 | orchestrator | Wednesday 04 February 2026 04:19:45 +0000 (0:00:06.837) 0:10:20.129 **** 2026-02-04 04:20:40.150751 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:20:40.150760 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:20:40.150776 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:20:40.150786 | orchestrator | 2026-02-04 04:20:40.150796 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-04 04:20:40.150806 | orchestrator | Wednesday 04 February 2026 04:19:52 +0000 (0:00:06.811) 0:10:26.941 **** 2026-02-04 04:20:40.150815 
| orchestrator | skipping: [testbed-node-1] 2026-02-04 04:20:40.150825 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:20:40.150836 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:20:40.150852 | orchestrator | 2026-02-04 04:20:40.150868 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-04 04:20:40.150884 | orchestrator | Wednesday 04 February 2026 04:19:59 +0000 (0:00:06.889) 0:10:33.830 **** 2026-02-04 04:20:40.150901 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:20:40.150916 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:20:40.150926 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:20:40.150936 | orchestrator | 2026-02-04 04:20:40.150945 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-02-04 04:20:40.150955 | orchestrator | Wednesday 04 February 2026 04:20:06 +0000 (0:00:07.390) 0:10:41.221 **** 2026-02-04 04:20:40.150964 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:20:40.150973 | orchestrator | 2026-02-04 04:20:40.150983 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-04 04:20:40.150992 | orchestrator | Wednesday 04 February 2026 04:20:10 +0000 (0:00:03.671) 0:10:44.892 **** 2026-02-04 04:20:40.151002 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:20:40.151011 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:20:40.151021 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:20:40.151030 | orchestrator | 2026-02-04 04:20:40.151039 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-02-04 04:20:40.151049 | orchestrator | Wednesday 04 February 2026 04:20:23 +0000 (0:00:13.005) 0:10:57.898 **** 2026-02-04 04:20:40.151058 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:20:40.151068 | orchestrator | 2026-02-04 04:20:40.151077 | orchestrator | RUNNING HANDLER 
[loadbalancer : Start master keepalived container] ************* 2026-02-04 04:20:40.151087 | orchestrator | Wednesday 04 February 2026 04:20:28 +0000 (0:00:04.617) 0:11:02.516 **** 2026-02-04 04:20:40.151096 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:20:40.151106 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:20:40.151115 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:20:40.151125 | orchestrator | 2026-02-04 04:20:40.151134 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-04 04:20:40.151144 | orchestrator | Wednesday 04 February 2026 04:20:35 +0000 (0:00:07.002) 0:11:09.519 **** 2026-02-04 04:20:40.151153 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:20:40.151163 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:20:40.151173 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:20:40.151182 | orchestrator | 2026-02-04 04:20:40.151200 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-04 04:20:40.151210 | orchestrator | Wednesday 04 February 2026 04:20:37 +0000 (0:00:02.106) 0:11:11.626 **** 2026-02-04 04:20:40.151219 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:20:40.151228 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:20:40.151239 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:20:40.151256 | orchestrator | 2026-02-04 04:20:40.151272 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 04:20:40.151290 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-04 04:20:40.151308 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-04 04:20:40.151338 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-04 04:20:41.131208 | orchestrator | 2026-02-04 
04:20:41.131335 | orchestrator | 2026-02-04 04:20:41.131354 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 04:20:41.131368 | orchestrator | Wednesday 04 February 2026 04:20:40 +0000 (0:00:02.901) 0:11:14.527 **** 2026-02-04 04:20:41.131380 | orchestrator | =============================================================================== 2026-02-04 04:20:41.131390 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.25s 2026-02-04 04:20:41.131401 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 13.01s 2026-02-04 04:20:41.131412 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.14s 2026-02-04 04:20:41.131423 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 7.66s 2026-02-04 04:20:41.131434 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.57s 2026-02-04 04:20:41.131444 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.53s 2026-02-04 04:20:41.131455 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.39s 2026-02-04 04:20:41.131466 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.19s 2026-02-04 04:20:41.131476 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.14s 2026-02-04 04:20:41.131487 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.11s 2026-02-04 04:20:41.131498 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 7.00s 2026-02-04 04:20:41.131508 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 6.92s 2026-02-04 04:20:41.131519 | orchestrator | loadbalancer : Stop master keepalived container 
------------------------- 6.89s 2026-02-04 04:20:41.131530 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.84s 2026-02-04 04:20:41.131541 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.81s 2026-02-04 04:20:41.131573 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.12s 2026-02-04 04:20:41.131585 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.85s 2026-02-04 04:20:41.131595 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.81s 2026-02-04 04:20:41.131606 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.68s 2026-02-04 04:20:41.131617 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.28s 2026-02-04 04:20:41.475073 | orchestrator | + osism apply -a upgrade opensearch 2026-02-04 04:20:43.607502 | orchestrator | 2026-02-04 04:20:43 | INFO  | Task 7b6d69b6-5d89-4b70-9bea-57ed5778c20e (opensearch) was prepared for execution. 2026-02-04 04:20:43.607626 | orchestrator | 2026-02-04 04:20:43 | INFO  | It takes a moment until task 7b6d69b6-5d89-4b70-9bea-57ed5778c20e (opensearch) has been started and output is visible here. 
2026-02-04 04:21:03.945526 | orchestrator | 2026-02-04 04:21:03.945728 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 04:21:03.945747 | orchestrator | 2026-02-04 04:21:03.945759 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 04:21:03.945769 | orchestrator | Wednesday 04 February 2026 04:20:50 +0000 (0:00:01.557) 0:00:01.557 **** 2026-02-04 04:21:03.945780 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:21:03.945792 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:21:03.945802 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:21:03.945811 | orchestrator | 2026-02-04 04:21:03.945822 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 04:21:03.945832 | orchestrator | Wednesday 04 February 2026 04:20:52 +0000 (0:00:02.069) 0:00:03.627 **** 2026-02-04 04:21:03.945843 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-04 04:21:03.945853 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-04 04:21:03.945863 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-04 04:21:03.945873 | orchestrator | 2026-02-04 04:21:03.945882 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-04 04:21:03.945892 | orchestrator | 2026-02-04 04:21:03.945902 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-04 04:21:03.945911 | orchestrator | Wednesday 04 February 2026 04:20:56 +0000 (0:00:03.936) 0:00:07.563 **** 2026-02-04 04:21:03.945922 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:21:03.945933 | orchestrator | 2026-02-04 04:21:03.945942 | orchestrator | TASK [opensearch : Setting sysctl values] 
************************************** 2026-02-04 04:21:03.945952 | orchestrator | Wednesday 04 February 2026 04:20:57 +0000 (0:00:01.672) 0:00:09.236 **** 2026-02-04 04:21:03.945962 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-04 04:21:03.945971 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-04 04:21:03.945981 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-04 04:21:03.945991 | orchestrator | 2026-02-04 04:21:03.946000 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-04 04:21:03.946010 | orchestrator | Wednesday 04 February 2026 04:20:59 +0000 (0:00:02.117) 0:00:11.354 **** 2026-02-04 04:21:03.946087 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:21:03.946125 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:21:03.946187 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:21:03.946205 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:21:03.946221 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:21:03.946240 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:21:03.946261 | orchestrator | 2026-02-04 04:21:03.946272 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-04 04:21:03.946284 | orchestrator | Wednesday 04 February 2026 04:21:02 +0000 (0:00:02.410) 0:00:13.764 **** 2026-02-04 04:21:03.946296 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:21:03.946308 | orchestrator | 2026-02-04 04:21:03.946327 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-04 04:21:09.378936 | orchestrator | Wednesday 04 February 2026 04:21:03 +0000 
(0:00:01.708) 0:00:15.472 **** 2026-02-04 04:21:09.379078 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:21:09.379101 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:21:09.379114 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:21:09.379179 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 
'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:21:09.379218 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:21:09.379232 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:21:09.379245 | orchestrator | 2026-02-04 04:21:09.379258 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-04 04:21:09.379270 | orchestrator | Wednesday 04 February 2026 04:21:07 +0000 (0:00:03.551) 0:00:19.024 **** 2026-02-04 04:21:09.379282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:21:09.379318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-04 04:21:11.174738 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:21:11.174824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 
 2026-02-04 04:21:11.174839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-04 04:21:11.174869 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:21:11.174890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:21:11.174913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-04 04:21:11.174922 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:21:11.174929 | orchestrator | 2026-02-04 04:21:11.174938 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-04 04:21:11.174947 | orchestrator | Wednesday 04 February 2026 04:21:09 +0000 (0:00:01.884) 0:00:20.908 **** 2026-02-04 04:21:11.174955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:21:11.174963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  
2026-02-04 04:21:11.174976 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:21:11.174988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:21:11.175003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-04 04:21:14.957450 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:21:14.957563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:21:14.957585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-04 04:21:14.957621 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:21:14.957633 | orchestrator | 2026-02-04 04:21:14.957645 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-04 04:21:14.957731 | orchestrator | Wednesday 04 February 2026 04:21:11 +0000 (0:00:01.788) 0:00:22.697 **** 2026-02-04 04:21:14.957760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:21:14.957792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:21:14.957805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:21:14.957828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:21:14.957846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:21:14.957868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:21:28.642708 | orchestrator | 2026-02-04 04:21:28.642824 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-04 04:21:28.642842 | orchestrator | Wednesday 04 February 2026 04:21:14 +0000 (0:00:03.790) 0:00:26.488 **** 2026-02-04 04:21:28.642855 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:21:28.642892 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:21:28.642904 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:21:28.642915 | orchestrator | 2026-02-04 04:21:28.642926 | orchestrator | TASK [opensearch : 
Copying over opensearch-dashboards config file] ************* 2026-02-04 04:21:28.642937 | orchestrator | Wednesday 04 February 2026 04:21:18 +0000 (0:00:03.631) 0:00:30.120 **** 2026-02-04 04:21:28.642948 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:21:28.642959 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:21:28.642970 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:21:28.642980 | orchestrator | 2026-02-04 04:21:28.642991 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-02-04 04:21:28.643002 | orchestrator | Wednesday 04 February 2026 04:21:21 +0000 (0:00:02.968) 0:00:33.088 **** 2026-02-04 04:21:28.643016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:21:28.643047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:21:28.643059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-04 04:21:28.643092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:21:28.643115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:21:28.643133 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-04 04:21:28.643145 | orchestrator | 2026-02-04 04:21:28.643157 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-02-04 04:21:28.643169 | orchestrator | Wednesday 04 February 2026 04:21:25 +0000 (0:00:03.647) 0:00:36.735 **** 2026-02-04 04:21:28.643180 | orchestrator | changed: [testbed-node-0] => { 2026-02-04 04:21:28.643192 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:21:28.643205 | orchestrator | } 2026-02-04 04:21:28.643218 | orchestrator | changed: [testbed-node-1] => { 2026-02-04 04:21:28.643230 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:21:28.643243 | orchestrator | } 2026-02-04 04:21:28.643255 | orchestrator | changed: [testbed-node-2] => { 2026-02-04 04:21:28.643268 | orchestrator 
|  "msg": "Notifying handlers" 2026-02-04 04:21:28.643281 | orchestrator | } 2026-02-04 04:21:28.643294 | orchestrator | 2026-02-04 04:21:28.643307 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-04 04:21:28.643327 | orchestrator | Wednesday 04 February 2026 04:21:26 +0000 (0:00:01.416) 0:00:38.152 **** 2026-02-04 04:21:28.643419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:24:34.682960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-04 04:24:34.683094 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:24:34.683132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:24:34.683148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-04 04:24:34.683182 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:24:34.683215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-04 04:24:34.683228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-04 04:24:34.683240 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:24:34.683252 | orchestrator | 2026-02-04 04:24:34.683265 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-04 04:24:34.683277 | orchestrator | Wednesday 04 February 2026 04:21:28 +0000 (0:00:02.019) 0:00:40.171 **** 2026-02-04 04:24:34.683288 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:24:34.683299 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:24:34.683315 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:24:34.683326 | orchestrator | 2026-02-04 04:24:34.683337 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-04 04:24:34.683348 | orchestrator | Wednesday 04 February 2026 04:21:30 +0000 (0:00:01.656) 0:00:41.827 **** 2026-02-04 04:24:34.683359 | orchestrator | 
2026-02-04 04:24:34.683370 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-04 04:24:34.683381 | orchestrator | Wednesday 04 February 2026 04:21:30 +0000 (0:00:00.477) 0:00:42.305 **** 2026-02-04 04:24:34.683392 | orchestrator | 2026-02-04 04:24:34.683402 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-04 04:24:34.683413 | orchestrator | Wednesday 04 February 2026 04:21:31 +0000 (0:00:00.464) 0:00:42.770 **** 2026-02-04 04:24:34.683424 | orchestrator | 2026-02-04 04:24:34.683434 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-04 04:24:34.683445 | orchestrator | Wednesday 04 February 2026 04:21:32 +0000 (0:00:00.816) 0:00:43.587 **** 2026-02-04 04:24:34.683464 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:24:34.683476 | orchestrator | 2026-02-04 04:24:34.683487 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-04 04:24:34.683498 | orchestrator | Wednesday 04 February 2026 04:21:35 +0000 (0:00:03.684) 0:00:47.272 **** 2026-02-04 04:24:34.683508 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:24:34.683519 | orchestrator | 2026-02-04 04:24:34.683531 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-04 04:24:34.683549 | orchestrator | Wednesday 04 February 2026 04:21:43 +0000 (0:00:07.805) 0:00:55.077 **** 2026-02-04 04:24:34.683643 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:24:34.683664 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:24:34.683681 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:24:34.683701 | orchestrator | 2026-02-04 04:24:34.683720 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-04 04:24:34.683740 | orchestrator | Wednesday 04 February 2026 04:22:51 +0000 (0:01:07.486) 
0:02:02.564 **** 2026-02-04 04:24:34.683751 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:24:34.683762 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:24:34.683773 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:24:34.683784 | orchestrator | 2026-02-04 04:24:34.683795 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-04 04:24:34.683806 | orchestrator | Wednesday 04 February 2026 04:24:24 +0000 (0:01:33.602) 0:03:36.167 **** 2026-02-04 04:24:34.683817 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:24:34.683828 | orchestrator | 2026-02-04 04:24:34.683839 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-04 04:24:34.683850 | orchestrator | Wednesday 04 February 2026 04:24:26 +0000 (0:00:01.794) 0:03:37.961 **** 2026-02-04 04:24:34.683860 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:24:34.683871 | orchestrator | 2026-02-04 04:24:34.683881 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-04 04:24:34.683892 | orchestrator | Wednesday 04 February 2026 04:24:29 +0000 (0:00:03.510) 0:03:41.472 **** 2026-02-04 04:24:34.683903 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:24:34.683913 | orchestrator | 2026-02-04 04:24:34.683924 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-04 04:24:34.683935 | orchestrator | Wednesday 04 February 2026 04:24:33 +0000 (0:00:03.435) 0:03:44.907 **** 2026-02-04 04:24:34.683945 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:24:34.683956 | orchestrator | 2026-02-04 04:24:34.683967 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-04 04:24:34.683987 | orchestrator | Wednesday 04 February 2026 04:24:34 +0000 (0:00:01.300) 
0:03:46.208 **** 2026-02-04 04:24:37.125343 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:24:37.125440 | orchestrator | 2026-02-04 04:24:37.125455 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 04:24:37.125469 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-04 04:24:37.125481 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-04 04:24:37.125493 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-04 04:24:37.125504 | orchestrator | 2026-02-04 04:24:37.125513 | orchestrator | 2026-02-04 04:24:37.125523 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 04:24:37.125531 | orchestrator | Wednesday 04 February 2026 04:24:36 +0000 (0:00:02.067) 0:03:48.276 **** 2026-02-04 04:24:37.125541 | orchestrator | =============================================================================== 2026-02-04 04:24:37.125649 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 93.60s 2026-02-04 04:24:37.125663 | orchestrator | opensearch : Restart opensearch container ------------------------------ 67.49s 2026-02-04 04:24:37.125674 | orchestrator | opensearch : Perform a flush -------------------------------------------- 7.81s 2026-02-04 04:24:37.125684 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.94s 2026-02-04 04:24:37.125707 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.79s 2026-02-04 04:24:37.125719 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.68s 2026-02-04 04:24:37.125740 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.65s 2026-02-04 
04:24:37.125751 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.63s 2026-02-04 04:24:37.125761 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.55s 2026-02-04 04:24:37.125790 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.51s 2026-02-04 04:24:37.125801 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.44s 2026-02-04 04:24:37.125812 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.97s 2026-02-04 04:24:37.125824 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.41s 2026-02-04 04:24:37.125835 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.12s 2026-02-04 04:24:37.125845 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.07s 2026-02-04 04:24:37.125855 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.07s 2026-02-04 04:24:37.125865 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.02s 2026-02-04 04:24:37.125875 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.88s 2026-02-04 04:24:37.125883 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.79s 2026-02-04 04:24:37.125890 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.79s 2026-02-04 04:24:37.447807 | orchestrator | + osism apply -a upgrade memcached 2026-02-04 04:24:39.593172 | orchestrator | 2026-02-04 04:24:39 | INFO  | Task be2f5f0f-ccc7-43a1-bf35-d7be51314b51 (memcached) was prepared for execution. 
2026-02-04 04:24:39.593278 | orchestrator | 2026-02-04 04:24:39 | INFO  | It takes a moment until task be2f5f0f-ccc7-43a1-bf35-d7be51314b51 (memcached) has been started and output is visible here. 2026-02-04 04:25:13.283077 | orchestrator | 2026-02-04 04:25:13.283229 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 04:25:13.283251 | orchestrator | 2026-02-04 04:25:13.283263 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 04:25:13.283275 | orchestrator | Wednesday 04 February 2026 04:24:45 +0000 (0:00:01.487) 0:00:01.487 **** 2026-02-04 04:25:13.283286 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:25:13.283298 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:25:13.283309 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:25:13.283320 | orchestrator | 2026-02-04 04:25:13.283331 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 04:25:13.283342 | orchestrator | Wednesday 04 February 2026 04:24:47 +0000 (0:00:01.856) 0:00:03.344 **** 2026-02-04 04:25:13.283354 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-04 04:25:13.283365 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-04 04:25:13.283376 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-04 04:25:13.283387 | orchestrator | 2026-02-04 04:25:13.283398 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-04 04:25:13.283409 | orchestrator | 2026-02-04 04:25:13.283420 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-04 04:25:13.283431 | orchestrator | Wednesday 04 February 2026 04:24:49 +0000 (0:00:01.782) 0:00:05.126 **** 2026-02-04 04:25:13.283469 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-04 04:25:13.283481 | orchestrator | 2026-02-04 04:25:13.283492 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-04 04:25:13.283503 | orchestrator | Wednesday 04 February 2026 04:24:51 +0000 (0:00:02.880) 0:00:08.006 **** 2026-02-04 04:25:13.283514 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-04 04:25:13.283525 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-04 04:25:13.283536 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-04 04:25:13.283574 | orchestrator | 2026-02-04 04:25:13.283589 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-04 04:25:13.283602 | orchestrator | Wednesday 04 February 2026 04:24:53 +0000 (0:00:02.058) 0:00:10.065 **** 2026-02-04 04:25:13.283616 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-04 04:25:13.283630 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-04 04:25:13.283642 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-04 04:25:13.283655 | orchestrator | 2026-02-04 04:25:13.283667 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-02-04 04:25:13.283679 | orchestrator | Wednesday 04 February 2026 04:24:56 +0000 (0:00:02.890) 0:00:12.955 **** 2026-02-04 04:25:13.283696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': 
{'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 04:25:13.283731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 04:25:13.283765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-04 04:25:13.283779 | orchestrator | 2026-02-04 04:25:13.283792 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 
2026-02-04 04:25:13.283805 | orchestrator | Wednesday 04 February 2026 04:24:59 +0000 (0:00:02.229) 0:00:15.185 **** 2026-02-04 04:25:13.283818 | orchestrator | changed: [testbed-node-0] => { 2026-02-04 04:25:13.283840 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:25:13.283853 | orchestrator | } 2026-02-04 04:25:13.283866 | orchestrator | changed: [testbed-node-1] => { 2026-02-04 04:25:13.283886 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:25:13.283905 | orchestrator | } 2026-02-04 04:25:13.283925 | orchestrator | changed: [testbed-node-2] => { 2026-02-04 04:25:13.283944 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:25:13.283961 | orchestrator | } 2026-02-04 04:25:13.283979 | orchestrator | 2026-02-04 04:25:13.283997 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-04 04:25:13.284016 | orchestrator | Wednesday 04 February 2026 04:25:00 +0000 (0:00:01.427) 0:00:16.613 **** 2026-02-04 04:25:13.284036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 04:25:13.284054 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:25:13.284072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 
'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 04:25:13.284092 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:25:13.284120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-04 04:25:13.284138 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:25:13.284155 | orchestrator | 2026-02-04 04:25:13.284173 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-04 04:25:13.284192 | orchestrator | Wednesday 04 February 2026 04:25:02 +0000 (0:00:02.042) 0:00:18.656 **** 2026-02-04 04:25:13.284211 | orchestrator | changed: [testbed-node-1] 2026-02-04 
04:25:13.284229 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:25:13.284247 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:25:13.284258 | orchestrator | 2026-02-04 04:25:13.284269 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 04:25:13.284281 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 04:25:13.284303 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 04:25:13.284315 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 04:25:13.284325 | orchestrator | 2026-02-04 04:25:13.284336 | orchestrator | 2026-02-04 04:25:13.284347 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 04:25:13.284368 | orchestrator | Wednesday 04 February 2026 04:25:13 +0000 (0:00:10.725) 0:00:29.381 **** 2026-02-04 04:25:13.653487 | orchestrator | =============================================================================== 2026-02-04 04:25:13.653616 | orchestrator | memcached : Restart memcached container -------------------------------- 10.73s 2026-02-04 04:25:13.653632 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.89s 2026-02-04 04:25:13.653644 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.88s 2026-02-04 04:25:13.653656 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.23s 2026-02-04 04:25:13.653667 | orchestrator | memcached : Ensuring config directories exist --------------------------- 2.06s 2026-02-04 04:25:13.653678 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.04s 2026-02-04 04:25:13.653689 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 1.86s 2026-02-04 04:25:13.653700 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.78s 2026-02-04 04:25:13.653725 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.43s 2026-02-04 04:25:13.986727 | orchestrator | + osism apply -a upgrade redis 2026-02-04 04:25:16.179654 | orchestrator | 2026-02-04 04:25:16 | INFO  | Task 0daad296-3647-42f5-be5b-d8006032e894 (redis) was prepared for execution. 2026-02-04 04:25:16.179747 | orchestrator | 2026-02-04 04:25:16 | INFO  | It takes a moment until task 0daad296-3647-42f5-be5b-d8006032e894 (redis) has been started and output is visible here. 2026-02-04 04:25:34.233611 | orchestrator | 2026-02-04 04:25:34.233715 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 04:25:34.233731 | orchestrator | 2026-02-04 04:25:34.233743 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 04:25:34.233755 | orchestrator | Wednesday 04 February 2026 04:25:22 +0000 (0:00:01.600) 0:00:01.600 **** 2026-02-04 04:25:34.233766 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:25:34.233779 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:25:34.233791 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:25:34.233802 | orchestrator | 2026-02-04 04:25:34.233813 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 04:25:34.233824 | orchestrator | Wednesday 04 February 2026 04:25:24 +0000 (0:00:01.803) 0:00:03.404 **** 2026-02-04 04:25:34.233834 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-04 04:25:34.233846 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-04 04:25:34.233857 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-04 04:25:34.233867 | orchestrator | 2026-02-04 
04:25:34.233878 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-04 04:25:34.233890 | orchestrator | 2026-02-04 04:25:34.233901 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-04 04:25:34.233910 | orchestrator | Wednesday 04 February 2026 04:25:25 +0000 (0:00:01.754) 0:00:05.159 **** 2026-02-04 04:25:34.233921 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:25:34.233933 | orchestrator | 2026-02-04 04:25:34.233944 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-04 04:25:34.233956 | orchestrator | Wednesday 04 February 2026 04:25:28 +0000 (0:00:02.809) 0:00:07.969 **** 2026-02-04 04:25:34.233998 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 04:25:34.234089 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 04:25:34.234106 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 04:25:34.234120 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 04:25:34.234151 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 04:25:34.234164 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 04:25:34.234176 | orchestrator | 2026-02-04 04:25:34.234188 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-04 04:25:34.234208 | orchestrator | Wednesday 04 February 2026 04:25:30 +0000 (0:00:02.390) 0:00:10.359 **** 2026-02-04 04:25:34.234221 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 04:25:34.234239 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 04:25:34.234252 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 04:25:34.234264 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 04:25:34.234283 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 04:25:41.563270 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 04:25:41.563667 | orchestrator | 2026-02-04 04:25:41.563706 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-04 04:25:41.563729 | orchestrator | Wednesday 04 February 2026 04:25:34 +0000 (0:00:03.217) 0:00:13.576 **** 2026-02-04 04:25:41.563748 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 04:25:41.563792 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 04:25:41.563815 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 04:25:41.563835 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 04:25:41.563854 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 04:25:41.563901 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 04:25:41.563937 | orchestrator | 2026-02-04 04:25:41.563957 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-02-04 04:25:41.563974 | orchestrator | Wednesday 04 February 2026 04:25:38 +0000 (0:00:04.168) 0:00:17.745 **** 2026-02-04 04:25:41.563992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 04:25:41.564020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 04:25:41.564040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-04 04:25:41.564058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 04:25:41.564079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-04 04:25:41.564114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-02-04 04:26:09.341331 | orchestrator | 2026-02-04 04:26:09.341458 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-02-04 04:26:09.341475 | orchestrator | Wednesday 04 February 2026 04:25:41 +0000 (0:00:03.167) 0:00:20.913 **** 2026-02-04 04:26:09.341488 | orchestrator | changed: [testbed-node-0] => { 2026-02-04 04:26:09.341501 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:26:09.341512 | orchestrator | } 2026-02-04 04:26:09.341573 | orchestrator | changed: [testbed-node-1] => { 2026-02-04 04:26:09.341587 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:26:09.341598 | orchestrator | } 2026-02-04 04:26:09.341609 | orchestrator | changed: [testbed-node-2] => { 2026-02-04 04:26:09.341620 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:26:09.341631 | orchestrator | } 2026-02-04 04:26:09.341642 | orchestrator | 2026-02-04 04:26:09.341653 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-04 04:26:09.341664 | orchestrator | Wednesday 04 February 2026 04:25:43 +0000 (0:00:01.598) 0:00:22.511 **** 2026-02-04 04:26:09.341694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-04 04:26:09.341710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-04 04:26:09.341723 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:26:09.341734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-04 04:26:09.341746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-04 04:26:09.341781 | orchestrator | 
skipping: [testbed-node-1] 2026-02-04 04:26:09.341793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-04 04:26:09.341824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-04 04:26:09.341839 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:26:09.341852 | orchestrator | 2026-02-04 04:26:09.341865 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-04 04:26:09.341878 | orchestrator | Wednesday 04 February 2026 04:25:45 +0000 (0:00:01.870) 0:00:24.382 **** 2026-02-04 04:26:09.341892 | orchestrator | 2026-02-04 04:26:09.341905 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-04 04:26:09.341918 | orchestrator | Wednesday 04 February 2026 04:25:45 +0000 
(0:00:00.487) 0:00:24.870 **** 2026-02-04 04:26:09.341932 | orchestrator | 2026-02-04 04:26:09.341944 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-04 04:26:09.341957 | orchestrator | Wednesday 04 February 2026 04:25:45 +0000 (0:00:00.477) 0:00:25.347 **** 2026-02-04 04:26:09.341970 | orchestrator | 2026-02-04 04:26:09.341983 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-04 04:26:09.341994 | orchestrator | Wednesday 04 February 2026 04:25:46 +0000 (0:00:00.828) 0:00:26.176 **** 2026-02-04 04:26:09.342005 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:26:09.342075 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:26:09.342089 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:26:09.342100 | orchestrator | 2026-02-04 04:26:09.342111 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-04 04:26:09.342122 | orchestrator | Wednesday 04 February 2026 04:25:57 +0000 (0:00:10.757) 0:00:36.933 **** 2026-02-04 04:26:09.342133 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:26:09.342143 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:26:09.342154 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:26:09.342165 | orchestrator | 2026-02-04 04:26:09.342176 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 04:26:09.342189 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 04:26:09.342201 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 04:26:09.342212 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 04:26:09.342223 | orchestrator | 2026-02-04 04:26:09.342234 | orchestrator | 2026-02-04 04:26:09.342254 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 04:26:09.342265 | orchestrator | Wednesday 04 February 2026 04:26:08 +0000 (0:00:11.309) 0:00:48.243 **** 2026-02-04 04:26:09.342277 | orchestrator | =============================================================================== 2026-02-04 04:26:09.342287 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.31s 2026-02-04 04:26:09.342298 | orchestrator | redis : Restart redis container ---------------------------------------- 10.76s 2026-02-04 04:26:09.342309 | orchestrator | redis : Copying over redis config files --------------------------------- 4.17s 2026-02-04 04:26:09.342320 | orchestrator | redis : Copying over default config.json files -------------------------- 3.22s 2026-02-04 04:26:09.342331 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.17s 2026-02-04 04:26:09.342375 | orchestrator | redis : include_tasks --------------------------------------------------- 2.81s 2026-02-04 04:26:09.342387 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.39s 2026-02-04 04:26:09.342398 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.87s 2026-02-04 04:26:09.342409 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.80s 2026-02-04 04:26:09.342420 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.79s 2026-02-04 04:26:09.342430 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.75s 2026-02-04 04:26:09.342441 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.60s 2026-02-04 04:26:09.679957 | orchestrator | + osism apply -a upgrade mariadb 2026-02-04 04:26:11.897707 | orchestrator | 2026-02-04 04:26:11 | INFO  | Task 
c9808e7a-d5e1-4737-b77b-7176d3130ac5 (mariadb) was prepared for execution. 2026-02-04 04:26:11.897813 | orchestrator | 2026-02-04 04:26:11 | INFO  | It takes a moment until task c9808e7a-d5e1-4737-b77b-7176d3130ac5 (mariadb) has been started and output is visible here. 2026-02-04 04:26:39.631576 | orchestrator | 2026-02-04 04:26:39.631659 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 04:26:39.631667 | orchestrator | 2026-02-04 04:26:39.631673 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 04:26:39.631679 | orchestrator | Wednesday 04 February 2026 04:26:18 +0000 (0:00:01.639) 0:00:01.639 **** 2026-02-04 04:26:39.631683 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:26:39.631689 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:26:39.631694 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:26:39.631698 | orchestrator | 2026-02-04 04:26:39.631703 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 04:26:39.631708 | orchestrator | Wednesday 04 February 2026 04:26:20 +0000 (0:00:02.197) 0:00:03.836 **** 2026-02-04 04:26:39.631713 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-04 04:26:39.631718 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-04 04:26:39.631723 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-04 04:26:39.631727 | orchestrator | 2026-02-04 04:26:39.631732 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-04 04:26:39.631737 | orchestrator | 2026-02-04 04:26:39.631741 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-04 04:26:39.631746 | orchestrator | Wednesday 04 February 2026 04:26:22 +0000 (0:00:02.144) 0:00:05.981 **** 2026-02-04 04:26:39.631750 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-0) 2026-02-04 04:26:39.631755 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-04 04:26:39.631760 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-04 04:26:39.631764 | orchestrator | 2026-02-04 04:26:39.631769 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-04 04:26:39.631774 | orchestrator | Wednesday 04 February 2026 04:26:24 +0000 (0:00:01.673) 0:00:07.655 **** 2026-02-04 04:26:39.631779 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:26:39.631800 | orchestrator | 2026-02-04 04:26:39.631815 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-04 04:26:39.631820 | orchestrator | Wednesday 04 February 2026 04:26:26 +0000 (0:00:02.608) 0:00:10.264 **** 2026-02-04 04:26:39.631829 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' 
server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 04:26:39.631849 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 04:26:39.631863 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-04 04:26:39.631868 | orchestrator |
2026-02-04 04:26:39.631873 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-02-04 04:26:39.631878 | orchestrator | Wednesday 04 February 2026 04:26:30 +0000 (0:00:04.153) 0:00:14.417 ****
2026-02-04 04:26:39.631883 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:26:39.631888 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:26:39.631893 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:26:39.631898 | orchestrator |
2026-02-04 04:26:39.631902 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-02-04 04:26:39.631907 | orchestrator | Wednesday 04 February 2026 04:26:32 +0000 (0:00:01.637) 0:00:16.054 ****
2026-02-04 04:26:39.631911 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:26:39.631916 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:26:39.631920 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:26:39.631925 | orchestrator |
2026-02-04 04:26:39.631929 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-02-04 04:26:39.631934 | orchestrator | Wednesday 04 February 2026 04:26:34 +0000 (0:00:02.351) 0:00:18.406 ****
2026-02-04 04:26:39.631943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 04:26:52.097493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 04:26:52.097662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-04 04:26:52.097702 | orchestrator |
2026-02-04 04:26:52.097717 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-02-04 04:26:52.097730 | orchestrator | Wednesday 04 February 2026 04:26:39 +0000 (0:00:04.708) 0:00:23.114 ****
2026-02-04 04:26:52.097741 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:26:52.097753 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:26:52.097763 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:26:52.097776 | orchestrator |
2026-02-04 04:26:52.097787 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-02-04 04:26:52.097815 | orchestrator | Wednesday 04 February 2026 04:26:41 +0000 (0:00:02.098) 0:00:25.212 ****
2026-02-04 04:26:52.097834 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:26:52.097845 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:26:52.097856 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:26:52.097867 | orchestrator |
2026-02-04 04:26:52.097877 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-04 04:26:52.097888 | orchestrator | Wednesday 04 February 2026 04:26:46 +0000 (0:00:04.881) 0:00:30.094 ****
2026-02-04 04:26:52.097899 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 04:26:52.097910 | orchestrator |
2026-02-04 04:26:52.097921 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-04 04:26:52.097933 | orchestrator | Wednesday 04 February 2026 04:26:48 +0000 (0:00:01.935) 0:00:32.029 ****
2026-02-04 04:26:52.097946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': 
{'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:26:52.097958 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:26:52.097983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:26:59.697026 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:26:59.697186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': 
['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-04 04:26:59.697222 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:26:59.697298 | orchestrator |
2026-02-04 04:26:59.697310 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-02-04 04:26:59.697321 | orchestrator | Wednesday 04 February 2026 04:26:52 +0000 (0:00:03.553) 0:00:35.582 ****
2026-02-04 04:26:59.697333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 
'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:26:59.697386 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:26:59.697419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:26:59.697431 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:26:59.697441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-04 04:26:59.697459 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:26:59.697474 | orchestrator |
2026-02-04 04:26:59.697491 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-02-04 04:26:59.697507 | orchestrator | Wednesday 04 February 2026 04:26:55 +0000 (0:00:03.526) 0:00:39.109 ****
2026-02-04 04:26:59.697578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:27:03.984360 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:03.984471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:27:03.984517 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:03.984615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 
'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-04 04:27:03.984634 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:27:03.984646 | orchestrator |
2026-02-04 04:27:03.984658 | orchestrator | TASK [service-check-containers : mariadb | Check containers] *******************
2026-02-04 04:27:03.984670 | orchestrator | Wednesday 04 February 2026 04:26:59 +0000 (0:00:04.072) 0:00:43.182 ****
2026-02-04 04:27:03.984702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 04:27:03.984732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 04:27:03.984756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-04 04:27:19.403845 | orchestrator | 2026-02-04 04:27:19.403960 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] *** 2026-02-04 04:27:19.403988 | orchestrator | Wednesday 04 February 2026 04:27:03 +0000 (0:00:04.290) 0:00:47.472 **** 2026-02-04 04:27:19.404010 | orchestrator | changed: [testbed-node-0] => { 2026-02-04 04:27:19.404031 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:27:19.404051 | orchestrator | } 2026-02-04 04:27:19.404071 | orchestrator | changed: [testbed-node-1] => { 2026-02-04 04:27:19.404092 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:27:19.404112 | orchestrator | } 2026-02-04 04:27:19.404124 | orchestrator | 
changed: [testbed-node-2] => { 2026-02-04 04:27:19.404135 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:27:19.404146 | orchestrator | } 2026-02-04 04:27:19.404157 | orchestrator | 2026-02-04 04:27:19.404168 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-04 04:27:19.404179 | orchestrator | Wednesday 04 February 2026 04:27:05 +0000 (0:00:01.406) 0:00:48.879 **** 2026-02-04 04:27:19.404211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:27:19.404251 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:19.404286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:27:19.404300 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:19.404318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:27:19.404330 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:19.404344 | orchestrator | 2026-02-04 04:27:19.404364 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-02-04 04:27:19.404395 | orchestrator | Wednesday 04 February 2026 04:27:09 +0000 (0:00:03.970) 0:00:52.850 **** 2026-02-04 04:27:19.404414 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:19.404433 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:19.404452 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:19.404472 | orchestrator | 2026-02-04 04:27:19.404492 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-02-04 04:27:19.404506 | orchestrator | Wednesday 04 February 2026 04:27:10 +0000 (0:00:01.363) 0:00:54.213 **** 2026-02-04 04:27:19.404520 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:19.404532 | orchestrator | 2026-02-04 04:27:19.404544 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-02-04 04:27:19.404587 | orchestrator | Wednesday 04 February 2026 04:27:11 +0000 (0:00:01.156) 0:00:55.370 **** 2026-02-04 04:27:19.404600 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:19.404612 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:19.404625 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:19.404638 | orchestrator | 2026-02-04 04:27:19.404651 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-02-04 04:27:19.404663 | orchestrator | Wednesday 04 February 2026 04:27:13 +0000 (0:00:01.424) 0:00:56.794 **** 2026-02-04 04:27:19.404676 | orchestrator | skipping: 
[testbed-node-0] 2026-02-04 04:27:19.404689 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:19.404701 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:19.404713 | orchestrator | 2026-02-04 04:27:19.404730 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-02-04 04:27:19.404749 | orchestrator | Wednesday 04 February 2026 04:27:14 +0000 (0:00:01.685) 0:00:58.480 **** 2026-02-04 04:27:19.404767 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:19.404785 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:19.404804 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:19.404824 | orchestrator | 2026-02-04 04:27:19.404842 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-02-04 04:27:19.404854 | orchestrator | Wednesday 04 February 2026 04:27:16 +0000 (0:00:01.529) 0:01:00.009 **** 2026-02-04 04:27:19.404864 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:19.404875 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:19.404886 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:19.404897 | orchestrator | 2026-02-04 04:27:19.404908 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-02-04 04:27:19.404918 | orchestrator | Wednesday 04 February 2026 04:27:17 +0000 (0:00:01.424) 0:01:01.434 **** 2026-02-04 04:27:19.404929 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:19.404940 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:19.404951 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:19.404962 | orchestrator | 2026-02-04 04:27:19.404995 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-02-04 04:27:37.690515 | orchestrator | Wednesday 04 February 2026 04:27:19 +0000 (0:00:01.453) 0:01:02.887 **** 2026-02-04 04:27:37.690715 | orchestrator | skipping: 
[testbed-node-0] 2026-02-04 04:27:37.690738 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:37.690750 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:37.690761 | orchestrator | 2026-02-04 04:27:37.690773 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-02-04 04:27:37.690784 | orchestrator | Wednesday 04 February 2026 04:27:21 +0000 (0:00:01.671) 0:01:04.559 **** 2026-02-04 04:27:37.690795 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 04:27:37.690807 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 04:27:37.690817 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 04:27:37.690828 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:37.690839 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-04 04:27:37.690849 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-04 04:27:37.690884 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-04 04:27:37.690896 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:37.690907 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-04 04:27:37.690917 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-04 04:27:37.690928 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-04 04:27:37.690938 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:37.690949 | orchestrator | 2026-02-04 04:27:37.690975 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-02-04 04:27:37.690986 | orchestrator | Wednesday 04 February 2026 04:27:22 +0000 (0:00:01.453) 0:01:06.012 **** 2026-02-04 04:27:37.690997 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:37.691008 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:37.691018 | 
orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:37.691029 | orchestrator | 2026-02-04 04:27:37.691044 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-02-04 04:27:37.691057 | orchestrator | Wednesday 04 February 2026 04:27:23 +0000 (0:00:01.376) 0:01:07.388 **** 2026-02-04 04:27:37.691069 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:37.691081 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:37.691094 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:37.691106 | orchestrator | 2026-02-04 04:27:37.691118 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-02-04 04:27:37.691132 | orchestrator | Wednesday 04 February 2026 04:27:25 +0000 (0:00:01.336) 0:01:08.725 **** 2026-02-04 04:27:37.691144 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:37.691157 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:37.691169 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:37.691181 | orchestrator | 2026-02-04 04:27:37.691193 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-02-04 04:27:37.691206 | orchestrator | Wednesday 04 February 2026 04:27:26 +0000 (0:00:01.432) 0:01:10.158 **** 2026-02-04 04:27:37.691218 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:37.691231 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:37.691243 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:37.691256 | orchestrator | 2026-02-04 04:27:37.691268 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-02-04 04:27:37.691280 | orchestrator | Wednesday 04 February 2026 04:27:28 +0000 (0:00:01.385) 0:01:11.543 **** 2026-02-04 04:27:37.691292 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:37.691305 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:37.691317 | 
orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:37.691328 | orchestrator | 2026-02-04 04:27:37.691340 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-02-04 04:27:37.691352 | orchestrator | Wednesday 04 February 2026 04:27:29 +0000 (0:00:01.416) 0:01:12.960 **** 2026-02-04 04:27:37.691365 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:37.691378 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:37.691390 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:37.691401 | orchestrator | 2026-02-04 04:27:37.691411 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-02-04 04:27:37.691422 | orchestrator | Wednesday 04 February 2026 04:27:31 +0000 (0:00:01.666) 0:01:14.627 **** 2026-02-04 04:27:37.691432 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:37.691443 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:37.691453 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:37.691464 | orchestrator | 2026-02-04 04:27:37.691474 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-02-04 04:27:37.691485 | orchestrator | Wednesday 04 February 2026 04:27:32 +0000 (0:00:01.649) 0:01:16.276 **** 2026-02-04 04:27:37.691496 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:37.691506 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:37.691516 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:37.691535 | orchestrator | 2026-02-04 04:27:37.691546 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-02-04 04:27:37.691607 | orchestrator | Wednesday 04 February 2026 04:27:34 +0000 (0:00:01.422) 0:01:17.699 **** 2026-02-04 04:27:37.691668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:27:37.691686 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:37.691698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:27:37.691718 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:37.691741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:27:54.587396 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:54.587541 | orchestrator | 2026-02-04 04:27:54.587567 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-02-04 04:27:54.587702 | orchestrator | Wednesday 04 February 2026 
04:27:37 +0000 (0:00:03.469) 0:01:21.169 **** 2026-02-04 04:27:54.587744 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:54.587763 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:54.587781 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:54.587799 | orchestrator | 2026-02-04 04:27:54.587817 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-02-04 04:27:54.587837 | orchestrator | Wednesday 04 February 2026 04:27:39 +0000 (0:00:01.595) 0:01:22.764 **** 2026-02-04 04:27:54.587864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:27:54.587924 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:54.587977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:27:54.588002 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:54.588036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-04 04:27:54.588072 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:54.588092 | orchestrator | 2026-02-04 04:27:54.588114 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-02-04 04:27:54.588136 | orchestrator | Wednesday 04 February 2026 04:27:42 +0000 (0:00:03.397) 0:01:26.161 **** 2026-02-04 04:27:54.588159 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:54.588181 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:54.588203 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:54.588225 | orchestrator | 2026-02-04 04:27:54.588246 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-04 04:27:54.588265 | orchestrator | Wednesday 04 February 2026 04:27:44 +0000 (0:00:01.718) 0:01:27.880 **** 2026-02-04 04:27:54.588285 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:54.588305 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:54.588324 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:54.588342 | orchestrator | 2026-02-04 04:27:54.588360 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-04 04:27:54.588379 | orchestrator | Wednesday 04 February 2026 04:27:45 +0000 (0:00:01.485) 0:01:29.365 **** 2026-02-04 04:27:54.588396 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:54.588414 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:54.588431 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:54.588450 | orchestrator | 2026-02-04 04:27:54.588467 | orchestrator 
| TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-04 04:27:54.588485 | orchestrator | Wednesday 04 February 2026 04:27:47 +0000 (0:00:01.482) 0:01:30.847 **** 2026-02-04 04:27:54.588503 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:54.588518 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:54.588534 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:54.588550 | orchestrator | 2026-02-04 04:27:54.588566 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-04 04:27:54.588614 | orchestrator | Wednesday 04 February 2026 04:27:49 +0000 (0:00:01.767) 0:01:32.614 **** 2026-02-04 04:27:54.588631 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:27:54.588648 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:27:54.588663 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:27:54.588680 | orchestrator | 2026-02-04 04:27:54.588695 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-04 04:27:54.588712 | orchestrator | Wednesday 04 February 2026 04:27:51 +0000 (0:00:01.965) 0:01:34.579 **** 2026-02-04 04:27:54.588728 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:27:54.588746 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:27:54.588763 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:27:54.588779 | orchestrator | 2026-02-04 04:27:54.588795 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-04 04:27:54.588811 | orchestrator | Wednesday 04 February 2026 04:27:52 +0000 (0:00:01.862) 0:01:36.441 **** 2026-02-04 04:27:54.588827 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:27:54.588843 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:27:54.588859 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:27:54.588875 | orchestrator | 2026-02-04 04:27:54.588891 | orchestrator | TASK [mariadb : Establish whether the 
cluster has already existed] ************* 2026-02-04 04:27:54.588907 | orchestrator | Wednesday 04 February 2026 04:27:54 +0000 (0:00:01.389) 0:01:37.831 **** 2026-02-04 04:27:54.588937 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:30:35.226176 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:30:35.226309 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:30:35.226332 | orchestrator | 2026-02-04 04:30:35.226346 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-04 04:30:35.226384 | orchestrator | Wednesday 04 February 2026 04:27:55 +0000 (0:00:01.406) 0:01:39.238 **** 2026-02-04 04:30:35.226418 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:30:35.226430 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:30:35.226441 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:30:35.226451 | orchestrator | 2026-02-04 04:30:35.226463 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-04 04:30:35.226474 | orchestrator | Wednesday 04 February 2026 04:27:57 +0000 (0:00:02.055) 0:01:41.294 **** 2026-02-04 04:30:35.226485 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:30:35.226495 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:30:35.226506 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:30:35.226516 | orchestrator | 2026-02-04 04:30:35.226527 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-04 04:30:35.226538 | orchestrator | Wednesday 04 February 2026 04:27:59 +0000 (0:00:01.371) 0:01:42.665 **** 2026-02-04 04:30:35.226549 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:30:35.226561 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:30:35.226571 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:30:35.226582 | orchestrator | 2026-02-04 04:30:35.226594 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-04 
04:30:35.226605 | orchestrator | Wednesday 04 February 2026 04:28:00 +0000 (0:00:01.397) 0:01:44.062 **** 2026-02-04 04:30:35.226616 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:30:35.226626 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:30:35.226637 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:30:35.226675 | orchestrator | 2026-02-04 04:30:35.226688 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-04 04:30:35.226701 | orchestrator | Wednesday 04 February 2026 04:28:04 +0000 (0:00:03.555) 0:01:47.618 **** 2026-02-04 04:30:35.226714 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:30:35.226727 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:30:35.226738 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:30:35.226748 | orchestrator | 2026-02-04 04:30:35.226759 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-04 04:30:35.226770 | orchestrator | Wednesday 04 February 2026 04:28:05 +0000 (0:00:01.408) 0:01:49.026 **** 2026-02-04 04:30:35.226781 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:30:35.226791 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:30:35.226802 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:30:35.226813 | orchestrator | 2026-02-04 04:30:35.226823 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-04 04:30:35.226835 | orchestrator | Wednesday 04 February 2026 04:28:06 +0000 (0:00:01.408) 0:01:50.435 **** 2026-02-04 04:30:35.226846 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:30:35.226857 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:30:35.226868 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:30:35.226879 | orchestrator | 2026-02-04 04:30:35.226890 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-04 04:30:35.226900 | orchestrator | Wednesday 04 
February 2026 04:28:08 +0000 (0:00:01.717) 0:01:52.153 **** 2026-02-04 04:30:35.226911 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:30:35.226922 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:30:35.226932 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:30:35.226943 | orchestrator | 2026-02-04 04:30:35.226954 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-04 04:30:35.226964 | orchestrator | Wednesday 04 February 2026 04:28:10 +0000 (0:00:01.611) 0:01:53.764 **** 2026-02-04 04:30:35.226975 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:30:35.226986 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:30:35.226997 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:30:35.227007 | orchestrator | 2026-02-04 04:30:35.227019 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-04 04:30:35.227029 | orchestrator | Wednesday 04 February 2026 04:28:11 +0000 (0:00:01.589) 0:01:55.353 **** 2026-02-04 04:30:35.227041 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:30:35.227065 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:30:35.227076 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:30:35.227087 | orchestrator | 2026-02-04 04:30:35.227097 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-04 04:30:35.227108 | orchestrator | Wednesday 04 February 2026 04:28:13 +0000 (0:00:01.725) 0:01:57.079 **** 2026-02-04 04:30:35.227119 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:30:35.227130 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:30:35.227141 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:30:35.227151 | orchestrator | 2026-02-04 04:30:35.227162 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-04 04:30:35.227173 | orchestrator | 2026-02-04 
04:30:35.227183 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-04 04:30:35.227194 | orchestrator | Wednesday 04 February 2026 04:28:15 +0000 (0:00:02.070) 0:01:59.149 **** 2026-02-04 04:30:35.227205 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:30:35.227216 | orchestrator | 2026-02-04 04:30:35.227227 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-04 04:30:35.227238 | orchestrator | Wednesday 04 February 2026 04:28:43 +0000 (0:00:27.440) 0:02:26.590 **** 2026-02-04 04:30:35.227249 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:30:35.227260 | orchestrator | 2026-02-04 04:30:35.227270 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-04 04:30:35.227281 | orchestrator | Wednesday 04 February 2026 04:28:48 +0000 (0:00:05.588) 0:02:32.178 **** 2026-02-04 04:30:35.227292 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:30:35.227302 | orchestrator | 2026-02-04 04:30:35.227313 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-04 04:30:35.227324 | orchestrator | 2026-02-04 04:30:35.227335 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-04 04:30:35.227345 | orchestrator | Wednesday 04 February 2026 04:28:51 +0000 (0:00:03.029) 0:02:35.207 **** 2026-02-04 04:30:35.227356 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:30:35.227367 | orchestrator | 2026-02-04 04:30:35.227378 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-04 04:30:35.227406 | orchestrator | Wednesday 04 February 2026 04:29:16 +0000 (0:00:25.088) 0:03:00.295 **** 2026-02-04 04:30:35.227418 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:30:35.227428 | orchestrator | 2026-02-04 04:30:35.227439 | orchestrator | TASK [mariadb : Wait for MariaDB 
service to sync WSREP] ************************ 2026-02-04 04:30:35.227456 | orchestrator | Wednesday 04 February 2026 04:29:22 +0000 (0:00:05.628) 0:03:05.924 **** 2026-02-04 04:30:35.227467 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:30:35.227477 | orchestrator | 2026-02-04 04:30:35.227488 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-04 04:30:35.227519 | orchestrator | 2026-02-04 04:30:35.227530 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-04 04:30:35.227541 | orchestrator | Wednesday 04 February 2026 04:29:25 +0000 (0:00:03.003) 0:03:08.928 **** 2026-02-04 04:30:35.227552 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:30:35.227563 | orchestrator | 2026-02-04 04:30:35.227574 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-04 04:30:35.227585 | orchestrator | Wednesday 04 February 2026 04:29:52 +0000 (0:00:26.640) 0:03:35.568 **** 2026-02-04 04:30:35.227595 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left). 
2026-02-04 04:30:35.227608 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:30:35.227619 | orchestrator | 2026-02-04 04:30:35.227629 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-04 04:30:35.227640 | orchestrator | Wednesday 04 February 2026 04:30:00 +0000 (0:00:08.050) 0:03:43.619 **** 2026-02-04 04:30:35.227671 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-04 04:30:35.227689 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-04 04:30:35.227718 | orchestrator | mariadb_bootstrap_restart 2026-02-04 04:30:35.227737 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:30:35.227754 | orchestrator | 2026-02-04 04:30:35.227766 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-04 04:30:35.227776 | orchestrator | skipping: no hosts matched 2026-02-04 04:30:35.227787 | orchestrator | 2026-02-04 04:30:35.227798 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-04 04:30:35.227809 | orchestrator | skipping: no hosts matched 2026-02-04 04:30:35.227819 | orchestrator | 2026-02-04 04:30:35.227830 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-04 04:30:35.227841 | orchestrator | 2026-02-04 04:30:35.227851 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-04 04:30:35.227862 | orchestrator | Wednesday 04 February 2026 04:30:04 +0000 (0:00:04.214) 0:03:47.834 **** 2026-02-04 04:30:35.227873 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:30:35.227883 | orchestrator | 2026-02-04 04:30:35.227894 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-04 04:30:35.227905 | orchestrator | Wednesday 04 February 2026 
04:30:06 +0000 (0:00:01.949) 0:03:49.784 **** 2026-02-04 04:30:35.227915 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:30:35.227926 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:30:35.227937 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:30:35.227947 | orchestrator | 2026-02-04 04:30:35.227958 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-04 04:30:35.227969 | orchestrator | Wednesday 04 February 2026 04:30:09 +0000 (0:00:03.211) 0:03:52.995 **** 2026-02-04 04:30:35.227979 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:30:35.227990 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:30:35.228000 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:30:35.228011 | orchestrator | 2026-02-04 04:30:35.228022 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-04 04:30:35.228032 | orchestrator | Wednesday 04 February 2026 04:30:12 +0000 (0:00:03.297) 0:03:56.293 **** 2026-02-04 04:30:35.228043 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:30:35.228054 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:30:35.228067 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:30:35.228085 | orchestrator | 2026-02-04 04:30:35.228101 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-04 04:30:35.228129 | orchestrator | Wednesday 04 February 2026 04:30:16 +0000 (0:00:03.294) 0:03:59.588 **** 2026-02-04 04:30:35.228148 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:30:35.228165 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:30:35.228182 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:30:35.228199 | orchestrator | 2026-02-04 04:30:35.228216 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-04 04:30:35.228233 | orchestrator | Wednesday 04 February 2026 04:30:19 +0000 
(0:00:03.470) 0:04:03.058 **** 2026-02-04 04:30:35.228250 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:30:35.228267 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:30:35.228285 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:30:35.228302 | orchestrator | 2026-02-04 04:30:35.228319 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-04 04:30:35.228338 | orchestrator | Wednesday 04 February 2026 04:30:26 +0000 (0:00:06.940) 0:04:09.999 **** 2026-02-04 04:30:35.228355 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:30:35.228373 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:30:35.228391 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:30:35.228409 | orchestrator | 2026-02-04 04:30:35.228427 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-04 04:30:35.228445 | orchestrator | Wednesday 04 February 2026 04:30:30 +0000 (0:00:03.568) 0:04:13.569 **** 2026-02-04 04:30:35.228463 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:30:35.228480 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:30:35.228511 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:30:35.228529 | orchestrator | 2026-02-04 04:30:35.228545 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-04 04:30:35.228562 | orchestrator | Wednesday 04 February 2026 04:30:31 +0000 (0:00:01.704) 0:04:15.274 **** 2026-02-04 04:30:35.228579 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:30:35.228596 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:30:35.228613 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:30:35.228630 | orchestrator | 2026-02-04 04:30:35.228648 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-04 04:30:35.228707 | orchestrator | Wednesday 04 February 2026 04:30:35 +0000 (0:00:03.434) 0:04:18.709 **** 
2026-02-04 04:30:56.588314 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:30:56.588411 | orchestrator | 2026-02-04 04:30:56.588425 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-02-04 04:30:56.588451 | orchestrator | Wednesday 04 February 2026 04:30:37 +0000 (0:00:02.045) 0:04:20.755 **** 2026-02-04 04:30:56.588460 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:30:56.588470 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:30:56.588479 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:30:56.588488 | orchestrator | 2026-02-04 04:30:56.588497 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 04:30:56.588507 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-04 04:30:56.588517 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-04 04:30:56.588526 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-04 04:30:56.588536 | orchestrator | 2026-02-04 04:30:56.588545 | orchestrator | 2026-02-04 04:30:56.588553 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 04:30:56.588562 | orchestrator | Wednesday 04 February 2026 04:30:56 +0000 (0:00:18.807) 0:04:39.563 **** 2026-02-04 04:30:56.588571 | orchestrator | =============================================================================== 2026-02-04 04:30:56.588580 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 79.17s 2026-02-04 04:30:56.588588 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 19.27s 2026-02-04 04:30:56.588597 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 
18.81s 2026-02-04 04:30:56.588605 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ----------------------- 10.25s 2026-02-04 04:30:56.588614 | orchestrator | service-check : mariadb | Get container facts --------------------------- 6.94s 2026-02-04 04:30:56.588622 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.88s 2026-02-04 04:30:56.588631 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.71s 2026-02-04 04:30:56.588640 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 4.29s 2026-02-04 04:30:56.588648 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.15s 2026-02-04 04:30:56.588702 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.07s 2026-02-04 04:30:56.588713 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.97s 2026-02-04 04:30:56.588722 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 3.57s 2026-02-04 04:30:56.588730 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 3.56s 2026-02-04 04:30:56.588739 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.55s 2026-02-04 04:30:56.588748 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.53s 2026-02-04 04:30:56.588757 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 3.47s 2026-02-04 04:30:56.588787 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 3.47s 2026-02-04 04:30:56.588796 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.43s 2026-02-04 04:30:56.588805 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 3.40s 
2026-02-04 04:30:56.588813 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 3.30s 2026-02-04 04:30:56.912339 | orchestrator | + osism apply -a upgrade rabbitmq 2026-02-04 04:30:58.983445 | orchestrator | 2026-02-04 04:30:58 | INFO  | Task 01a77dcd-4043-4587-b8bb-d5ada140e452 (rabbitmq) was prepared for execution. 2026-02-04 04:30:58.983550 | orchestrator | 2026-02-04 04:30:58 | INFO  | It takes a moment until task 01a77dcd-4043-4587-b8bb-d5ada140e452 (rabbitmq) has been started and output is visible here. 2026-02-04 04:31:43.535346 | orchestrator | 2026-02-04 04:31:43.535462 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-04 04:31:43.535479 | orchestrator | 2026-02-04 04:31:43.535491 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-04 04:31:43.535503 | orchestrator | Wednesday 04 February 2026 04:31:04 +0000 (0:00:01.354) 0:00:01.354 **** 2026-02-04 04:31:43.535514 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:31:43.535526 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:31:43.535537 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:31:43.535548 | orchestrator | 2026-02-04 04:31:43.535560 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-04 04:31:43.535578 | orchestrator | Wednesday 04 February 2026 04:31:06 +0000 (0:00:01.900) 0:00:03.255 **** 2026-02-04 04:31:43.535595 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-04 04:31:43.535607 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-04 04:31:43.535619 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-04 04:31:43.535638 | orchestrator | 2026-02-04 04:31:43.535649 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-04 04:31:43.535660 | orchestrator | 
2026-02-04 04:31:43.535671 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-04 04:31:43.535723 | orchestrator | Wednesday 04 February 2026 04:31:08 +0000 (0:00:01.937) 0:00:05.193 **** 2026-02-04 04:31:43.535735 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:31:43.535747 | orchestrator | 2026-02-04 04:31:43.535774 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-04 04:31:43.535786 | orchestrator | Wednesday 04 February 2026 04:31:11 +0000 (0:00:02.739) 0:00:07.933 **** 2026-02-04 04:31:43.535797 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:31:43.535808 | orchestrator | 2026-02-04 04:31:43.535820 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-04 04:31:43.535830 | orchestrator | Wednesday 04 February 2026 04:31:13 +0000 (0:00:02.362) 0:00:10.295 **** 2026-02-04 04:31:43.535841 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:31:43.535852 | orchestrator | 2026-02-04 04:31:43.535863 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-04 04:31:43.535874 | orchestrator | Wednesday 04 February 2026 04:31:16 +0000 (0:00:03.325) 0:00:13.620 **** 2026-02-04 04:31:43.535885 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:31:43.535896 | orchestrator | 2026-02-04 04:31:43.535907 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-04 04:31:43.535918 | orchestrator | Wednesday 04 February 2026 04:31:27 +0000 (0:00:10.071) 0:00:23.692 **** 2026-02-04 04:31:43.535928 | orchestrator | ok: [testbed-node-0] => { 2026-02-04 04:31:43.535939 | orchestrator |  "changed": false, 2026-02-04 04:31:43.535951 | orchestrator |  "msg": "All assertions passed" 2026-02-04 04:31:43.535962 | orchestrator | } 2026-02-04 
04:31:43.535973 | orchestrator | 2026-02-04 04:31:43.535983 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-04 04:31:43.536022 | orchestrator | Wednesday 04 February 2026 04:31:28 +0000 (0:00:01.355) 0:00:25.047 **** 2026-02-04 04:31:43.536042 | orchestrator | ok: [testbed-node-0] => { 2026-02-04 04:31:43.536061 | orchestrator |  "changed": false, 2026-02-04 04:31:43.536079 | orchestrator |  "msg": "All assertions passed" 2026-02-04 04:31:43.536095 | orchestrator | } 2026-02-04 04:31:43.536105 | orchestrator | 2026-02-04 04:31:43.536116 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-04 04:31:43.536128 | orchestrator | Wednesday 04 February 2026 04:31:30 +0000 (0:00:01.709) 0:00:26.757 **** 2026-02-04 04:31:43.536139 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:31:43.536150 | orchestrator | 2026-02-04 04:31:43.536161 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-04 04:31:43.536171 | orchestrator | Wednesday 04 February 2026 04:31:31 +0000 (0:00:01.880) 0:00:28.638 **** 2026-02-04 04:31:43.536182 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:31:43.536193 | orchestrator | 2026-02-04 04:31:43.536204 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-04 04:31:43.536215 | orchestrator | Wednesday 04 February 2026 04:31:34 +0000 (0:00:02.299) 0:00:30.938 **** 2026-02-04 04:31:43.536226 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:31:43.536237 | orchestrator | 2026-02-04 04:31:43.536248 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-04 04:31:43.536259 | orchestrator | Wednesday 04 February 2026 04:31:37 +0000 (0:00:03.137) 0:00:34.076 **** 2026-02-04 04:31:43.536270 | 
orchestrator | skipping: [testbed-node-0] 2026-02-04 04:31:43.536280 | orchestrator | 2026-02-04 04:31:43.536291 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-04 04:31:43.536302 | orchestrator | Wednesday 04 February 2026 04:31:39 +0000 (0:00:01.861) 0:00:35.938 **** 2026-02-04 04:31:43.536339 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 04:31:43.536362 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 04:31:43.536385 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-04 04:31:43.536397 | orchestrator | 2026-02-04 04:31:43.536409 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-04 04:31:43.536420 | orchestrator | Wednesday 04 February 2026 04:31:41 +0000 (0:00:01.804) 0:00:37.742 **** 2026-02-04 04:31:43.536432 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:31:43.536454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:32:03.080919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:32:03.081046 | orchestrator |
2026-02-04 04:32:03.081063 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-02-04 04:32:03.081074 | orchestrator | Wednesday 04 February 2026 04:31:43 +0000 (0:00:02.459) 0:00:40.202 ****
2026-02-04 04:32:03.081083 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-04 04:32:03.081093 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-04 04:32:03.081115 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-04 04:32:03.081124 | orchestrator |
2026-02-04 04:32:03.081134 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-02-04 04:32:03.081142 | orchestrator | Wednesday 04 February 2026 04:31:45 +0000 (0:00:02.464) 0:00:42.666 ****
2026-02-04 04:32:03.081151 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-04 04:32:03.081160 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-04 04:32:03.081169 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-04 04:32:03.081177 | orchestrator |
2026-02-04 04:32:03.081186 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-02-04 04:32:03.081195 | orchestrator | Wednesday 04 February 2026 04:31:49 +0000 (0:00:03.126) 0:00:45.792 ****
2026-02-04 04:32:03.081204 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-04 04:32:03.081212 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-04 04:32:03.081221 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-04 04:32:03.081229 | orchestrator |
2026-02-04 04:32:03.081239 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-02-04 04:32:03.081247 | orchestrator | Wednesday 04 February 2026 04:31:51 +0000 (0:00:02.422) 0:00:48.215 ****
2026-02-04 04:32:03.081256 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-04 04:32:03.081264 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-04 04:32:03.081273 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-04 04:32:03.081282 | orchestrator |
2026-02-04 04:32:03.081291 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-02-04 04:32:03.081299 | orchestrator | Wednesday 04 February 2026 04:31:53 +0000 (0:00:02.295) 0:00:50.511 ****
2026-02-04 04:32:03.081308 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-04 04:32:03.081317 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-04 04:32:03.081325 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-04 04:32:03.081334 | orchestrator |
2026-02-04 04:32:03.081343 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-02-04 04:32:03.081351 | orchestrator | Wednesday 04 February 2026 04:31:56 +0000 (0:00:02.358) 0:00:52.870 ****
2026-02-04 04:32:03.081360 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-04 04:32:03.081376 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-04 04:32:03.081385 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-04 04:32:03.081394 | orchestrator |
2026-02-04 04:32:03.081403 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-04 04:32:03.081411 | orchestrator | Wednesday 04 February 2026 04:31:58 +0000 (0:00:02.562) 0:00:55.432 ****
2026-02-04 04:32:03.081420 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 04:32:03.081429 | orchestrator |
2026-02-04 04:32:03.081453 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] *******
2026-02-04 04:32:03.081465 | orchestrator | Wednesday 04 February 2026 04:32:00 +0000 (0:00:01.823) 0:00:57.255 ****
2026-02-04 04:32:03.081483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:32:03.081496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:32:03.081509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:32:03.081526 | orchestrator |
2026-02-04 04:32:03.081537 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] ***
2026-02-04 04:32:03.081580 | orchestrator | Wednesday 04 February 2026 04:32:02 +0000 (0:00:02.257) 0:00:59.513 ****
2026-02-04 04:32:03.081610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:32:12.151757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:32:12.151898 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:32:12.151923 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:32:12.151938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:32:12.151950 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:32:12.151962 | orchestrator |
2026-02-04 04:32:12.151974 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] ****
2026-02-04 04:32:12.152010 | orchestrator | Wednesday 04 February 2026 04:32:04 +0000 (0:00:01.437) 0:01:00.951 ****
2026-02-04 04:32:12.152023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:32:12.152064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:32:12.152078 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:32:12.152089 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:32:12.152101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:32:12.152113 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:32:12.152130 | orchestrator |
2026-02-04 04:32:12.152148 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-02-04 04:32:12.152167 | orchestrator | Wednesday 04 February 2026 04:32:06 +0000 (0:00:01.822) 0:01:02.773 ****
2026-02-04 04:32:12.152186 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:32:12.152206 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:32:12.152226 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:32:12.152260 | orchestrator |
2026-02-04 04:32:12.152278 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ******************
2026-02-04 04:32:12.152292 | orchestrator | Wednesday 04 February 2026 04:32:09 +0000 (0:00:03.728) 0:01:06.501 ****
2026-02-04 04:32:12.152306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:32:12.152339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:33:57.630469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:33:57.630615 | orchestrator |
2026-02-04 04:33:57.630642 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] ***
2026-02-04 04:33:57.630663 | orchestrator | Wednesday 04 February 2026 04:32:12 +0000 (0:00:02.319) 0:01:08.821 ****
2026-02-04 04:33:57.630681 | orchestrator | changed: [testbed-node-0] => {
2026-02-04 04:33:57.630700 | orchestrator |  "msg": "Notifying handlers"
2026-02-04 04:33:57.630804 | orchestrator | }
2026-02-04 04:33:57.630825 | orchestrator | changed: [testbed-node-1] => {
2026-02-04 04:33:57.630876 | orchestrator |  "msg": "Notifying handlers"
2026-02-04 04:33:57.630896 | orchestrator | }
2026-02-04 04:33:57.630915 | orchestrator | changed: [testbed-node-2] => {
2026-02-04 04:33:57.630934 | orchestrator |  "msg": "Notifying handlers"
2026-02-04 04:33:57.630953 | orchestrator | }
2026-02-04 04:33:57.630973 | orchestrator |
2026-02-04 04:33:57.630995 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-04 04:33:57.631018 | orchestrator | Wednesday 04 February 2026 04:32:13 +0000 (0:00:01.399) 0:01:10.221 ****
2026-02-04 04:33:57.631044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:33:57.631070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:33:57.631093 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:33:57.631117 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:33:57.631226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-04 04:33:57.631257 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:33:57.631279 | orchestrator |
2026-02-04 04:33:57.631316 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-02-04 04:33:57.631339 | orchestrator | Wednesday 04 February 2026 04:32:15 +0000 (0:00:02.076) 0:01:12.298 ****
2026-02-04 04:33:57.631359 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:33:57.631380 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:33:57.631400 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:33:57.631418 | orchestrator |
2026-02-04 04:33:57.631437 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-04 04:33:57.631457 | orchestrator |
2026-02-04 04:33:57.631477 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-04 04:33:57.631498 | orchestrator | Wednesday 04 February 2026 04:32:17 +0000 (0:00:02.072) 0:01:14.223 ****
2026-02-04 04:33:57.631518 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:33:57.631538 | orchestrator |
2026-02-04 04:33:57.631558 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-04 04:33:57.631578 | orchestrator | Wednesday 04 February 2026 04:32:19 +0000 (0:00:02.072) 0:01:16.295 ****
2026-02-04 04:33:57.631598 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:33:57.631618 | orchestrator |
2026-02-04 04:33:57.631637 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-04 04:33:57.631655 | orchestrator | Wednesday 04 February 2026 04:32:30 +0000 (0:00:10.399) 0:01:26.695 ****
2026-02-04 04:33:57.631673 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:33:57.631690 | orchestrator |
2026-02-04 04:33:57.631708 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-04 04:33:57.631755 | orchestrator | Wednesday 04 February 2026 04:32:39 +0000 (0:00:09.156) 0:01:35.851 ****
2026-02-04 04:33:57.631774 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:33:57.631791 | orchestrator |
2026-02-04 04:33:57.631809 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-04 04:33:57.631825 | orchestrator |
2026-02-04 04:33:57.631842 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-04 04:33:57.631859 | orchestrator | Wednesday 04 February 2026 04:32:49 +0000 (0:00:10.520) 0:01:46.371 ****
2026-02-04 04:33:57.631876 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:33:57.631892 | orchestrator |
2026-02-04 04:33:57.631909 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-04 04:33:57.631926 | orchestrator | Wednesday 04 February 2026 04:32:51 +0000 (0:00:01.678) 0:01:48.050 ****
2026-02-04 04:33:57.631943 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:33:57.631959 | orchestrator |
2026-02-04 04:33:57.631976 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-04 04:33:57.631992 | orchestrator | Wednesday 04 February 2026 04:32:59 +0000 (0:00:08.559) 0:01:56.609 ****
2026-02-04 04:33:57.632009 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:33:57.632026 | orchestrator |
2026-02-04 04:33:57.632042 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-04 04:33:57.632058 | orchestrator | Wednesday 04 February 2026 04:33:13 +0000 (0:00:13.893) 0:02:10.503 ****
2026-02-04 04:33:57.632075 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:33:57.632092 | orchestrator |
2026-02-04 04:33:57.632109 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-04 04:33:57.632125 | orchestrator |
2026-02-04 04:33:57.632141 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-04 04:33:57.632158 | orchestrator | Wednesday 04 February 2026 04:33:23 +0000 (0:00:09.595) 0:02:20.098 ****
2026-02-04 04:33:57.632175 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:33:57.632192 | orchestrator |
2026-02-04 04:33:57.632210 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-04 04:33:57.632227 | orchestrator | Wednesday 04 February 2026 04:33:25 +0000 (0:00:01.754) 0:02:21.852 ****
2026-02-04 04:33:57.632244 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:33:57.632262 | orchestrator |
2026-02-04 04:33:57.632280 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-04 04:33:57.632401 | orchestrator | Wednesday 04 February 2026 04:33:34 +0000 (0:00:09.140) 0:02:30.993 ****
2026-02-04 04:33:57.632423 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:33:57.632441 | orchestrator |
2026-02-04 04:33:57.632460 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-04 04:33:57.632478 | orchestrator | Wednesday 04 February 2026 04:33:47 +0000 (0:00:13.681) 0:02:44.674 ****
2026-02-04 04:33:57.632496 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:33:57.632513 | orchestrator |
2026-02-04 04:33:57.632540 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-02-04 04:33:57.632552 | orchestrator |
2026-02-04 04:33:57.632563 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-02-04 04:33:57.632590 | orchestrator | Wednesday 04 February 2026 04:33:57 +0000 (0:00:09.615) 0:02:54.290 ****
2026-02-04 04:34:04.161655 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-04 04:34:04.161811 | orchestrator |
2026-02-04 04:34:04.161829 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-02-04 04:34:04.161841 | orchestrator | Wednesday 04 February 2026 04:33:59 +0000 (0:00:01.391) 0:02:55.682 ****
2026-02-04 04:34:04.161852 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:34:04.161864 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:34:04.161875 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:34:04.161886 | orchestrator |
2026-02-04 04:34:04.161898 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 04:34:04.161910 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 04:34:04.161923 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-04 04:34:04.161934 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-04 04:34:04.161945 | orchestrator |
2026-02-04 04:34:04.161956 | orchestrator |
2026-02-04 04:34:04.161967 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 04:34:04.161978 | orchestrator | Wednesday 04 February 2026 04:34:03 +0000 (0:00:04.743) 0:03:00.426 ****
2026-02-04 04:34:04.161989 | orchestrator | ===============================================================================
2026-02-04 04:34:04.162000 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 36.73s
2026-02-04 04:34:04.162011 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 29.73s
2026-02-04 04:34:04.162076 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 28.10s
2026-02-04 04:34:04.162088 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------ 10.07s
2026-02-04 04:34:04.162099 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.51s
2026-02-04 04:34:04.162110 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.74s
2026-02-04 04:34:04.162121 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.73s
2026-02-04 04:34:04.162131 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.33s
2026-02-04 04:34:04.162142 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 3.14s
2026-02-04 04:34:04.162153 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.13s
2026-02-04 04:34:04.162164 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.74s
2026-02-04 04:34:04.162175 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.56s
2026-02-04 04:34:04.162186 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.46s
2026-02-04 04:34:04.162197 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.46s
2026-02-04 04:34:04.162238 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.42s
2026-02-04 04:34:04.162249 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.36s
2026-02-04 04:34:04.162260 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.36s
2026-02-04 04:34:04.162270 | orchestrator | service-check-containers : rabbitmq | Check containers ------------------ 2.32s
2026-02-04 04:34:04.162281 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.30s
2026-02-04 04:34:04.162292 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.30s
2026-02-04 04:34:04.506953 | orchestrator | + osism apply -a upgrade openvswitch
2026-02-04 04:34:06.721766 | orchestrator | 2026-02-04 04:34:06 | INFO  | Task 110fb88d-4c20-44f7-90f5-47e5eda83af1 (openvswitch) was prepared for execution.
2026-02-04 04:34:06.721868 | orchestrator | 2026-02-04 04:34:06 | INFO  | It takes a moment until task 110fb88d-4c20-44f7-90f5-47e5eda83af1 (openvswitch) has been started and output is visible here.
2026-02-04 04:34:34.022898 | orchestrator |
2026-02-04 04:34:34.023025 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 04:34:34.023054 | orchestrator |
2026-02-04 04:34:34.023071 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 04:34:34.023118 | orchestrator | Wednesday 04 February 2026 04:34:12 +0000 (0:00:01.700) 0:00:01.700 ****
2026-02-04 04:34:34.023137 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:34:34.023154 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:34:34.023170 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:34:34.023186 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:34:34.023202 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:34:34.023218 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:34:34.023235 | orchestrator |
2026-02-04 04:34:34.023253 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 04:34:34.023268 | orchestrator | Wednesday 04 February 2026 04:34:15 +0000 (0:00:02.444) 0:00:04.144 ****
2026-02-04 04:34:34.023285 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-04 04:34:34.023302 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-04 04:34:34.023339 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-04 04:34:34.023355 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-04 04:34:34.023371 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-04 04:34:34.023385 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-04 04:34:34.023400 | orchestrator |
2026-02-04 04:34:34.023416 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-02-04 04:34:34.023432 | orchestrator |
2026-02-04 04:34:34.023449 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-02-04 04:34:34.023464 | orchestrator | Wednesday 04 February 2026 04:34:17 +0000 (0:00:02.337) 0:00:06.482 ****
2026-02-04 04:34:34.023481 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 04:34:34.023499 | orchestrator |
2026-02-04 04:34:34.023517 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-04 04:34:34.023534 | orchestrator | Wednesday 04 February 2026 04:34:21 +0000 (0:00:03.570) 0:00:10.052 ****
2026-02-04 04:34:34.023550 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-02-04 04:34:34.023568 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-02-04 04:34:34.023585 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-02-04 04:34:34.023602 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-02-04 04:34:34.023620 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-02-04 04:34:34.023637 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-02-04 04:34:34.023683 | orchestrator |
2026-02-04 04:34:34.023701 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-04 04:34:34.023745 | orchestrator | Wednesday 04 February 2026 04:34:23 +0000 (0:00:02.411) 0:00:12.464 ****
2026-02-04 04:34:34.023763 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-02-04 04:34:34.023781 | orchestrator | ok:
[testbed-node-2] => (item=openvswitch) 2026-02-04 04:34:34.023797 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-04 04:34:34.023814 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-04 04:34:34.023830 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-04 04:34:34.023845 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-04 04:34:34.023861 | orchestrator | 2026-02-04 04:34:34.023877 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-04 04:34:34.023893 | orchestrator | Wednesday 04 February 2026 04:34:26 +0000 (0:00:02.776) 0:00:15.241 **** 2026-02-04 04:34:34.023908 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-04 04:34:34.023924 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:34:34.023940 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-04 04:34:34.023954 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:34:34.023970 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-04 04:34:34.023987 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:34:34.024002 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-04 04:34:34.024020 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:34:34.024036 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-04 04:34:34.024052 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:34:34.024067 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-04 04:34:34.024083 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:34:34.024099 | orchestrator | 2026-02-04 04:34:34.024115 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-04 04:34:34.024130 | orchestrator | Wednesday 04 February 2026 04:34:29 +0000 (0:00:02.706) 0:00:17.947 **** 2026-02-04 04:34:34.024145 | orchestrator | 
skipping: [testbed-node-0] 2026-02-04 04:34:34.024162 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:34:34.024178 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:34:34.024195 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:34:34.024209 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:34:34.024225 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:34:34.024242 | orchestrator | 2026-02-04 04:34:34.024259 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-04 04:34:34.024275 | orchestrator | Wednesday 04 February 2026 04:34:31 +0000 (0:00:02.176) 0:00:20.123 **** 2026-02-04 04:34:34.024323 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:34.024361 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:34.024395 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:34.024413 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:34.024431 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 04:34:34.024450 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:34.024479 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 
04:34:37.208910 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 04:34:37.209029 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 04:34:37.209044 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 04:34:37.209055 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:37.209066 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 04:34:37.209077 | orchestrator | 2026-02-04 04:34:37.209089 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-04 04:34:37.209100 | orchestrator | Wednesday 04 February 2026 04:34:34 +0000 
(0:00:02.765) 0:00:22.889 **** 2026-02-04 04:34:37.209133 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:37.209152 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:37.209162 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:37.209172 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:37.209183 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:37.209192 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:37.209220 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 04:34:42.910232 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 04:34:42.910344 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 04:34:42.910361 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 04:34:42.910373 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 04:34:42.910385 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-04 04:34:42.910419 | orchestrator | 2026-02-04 04:34:42.910448 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-04 04:34:42.910460 | orchestrator | Wednesday 04 February 2026 04:34:38 +0000 (0:00:04.343) 0:00:27.232 **** 2026-02-04 04:34:42.910471 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:34:42.910483 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:34:42.910493 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:34:42.910504 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:34:42.910514 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:34:42.910525 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:34:42.910536 | orchestrator | 2026-02-04 04:34:42.910547 | orchestrator | TASK [service-check-containers : openvswitch | Check 
containers] *************** 2026-02-04 04:34:42.910574 | orchestrator | Wednesday 04 February 2026 04:34:40 +0000 (0:00:02.433) 0:00:29.666 **** 2026-02-04 04:34:42.910587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:42.910601 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:42.910612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:42.910623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-04 04:34:42.910648 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2026-02-04 04:34:42.910669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 04:34:47.011681 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 04:34:47.011846 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 04:34:47.011864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 04:34:47.011895 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 04:34:47.011918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 04:34:47.011945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 04:34:47.011955 | orchestrator |
2026-02-04 04:34:47.011966 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-02-04 04:34:47.011976 | orchestrator | Wednesday 04 February 2026 04:34:44 +0000 (0:00:03.517) 0:00:33.184 ****
2026-02-04 04:34:47.011985 | orchestrator | changed: [testbed-node-0] => {
2026-02-04 04:34:47.011995 | orchestrator |  "msg": "Notifying handlers"
2026-02-04 04:34:47.012004 | orchestrator | }
2026-02-04 04:34:47.012013 | orchestrator | changed: [testbed-node-1] => {
2026-02-04 04:34:47.012022 | orchestrator |  "msg": "Notifying handlers"
2026-02-04 04:34:47.012031 | orchestrator | }
2026-02-04 04:34:47.012039 | orchestrator | changed: [testbed-node-2] => {
2026-02-04 04:34:47.012048 | orchestrator |  "msg": "Notifying handlers"
2026-02-04 04:34:47.012056 | orchestrator | }
2026-02-04 04:34:47.012065 | orchestrator | changed: [testbed-node-3] => {
2026-02-04 04:34:47.012074 | orchestrator |  "msg": "Notifying handlers"
2026-02-04 04:34:47.012082 | orchestrator | }
2026-02-04 04:34:47.012091 | orchestrator | changed: [testbed-node-4] => {
2026-02-04 04:34:47.012099 | orchestrator |  "msg": "Notifying handlers"
2026-02-04 04:34:47.012108 | orchestrator | }
2026-02-04 04:34:47.012117 | orchestrator | changed: [testbed-node-5] => {
2026-02-04 04:34:47.012125 | orchestrator |  "msg": "Notifying handlers"
2026-02-04 04:34:47.012134 | orchestrator | }
2026-02-04 04:34:47.012143 | orchestrator |
2026-02-04 04:34:47.012152 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-04 04:34:47.012161 | orchestrator | Wednesday 04 February 2026 04:34:46 +0000 (0:00:02.216) 0:00:35.401 ****
2026-02-04 04:34:47.012170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 04:34:47.012187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 04:34:47.012196 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:34:47.012210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 04:34:47.012220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 04:34:47.012237 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:35:17.633835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 04:35:17.633958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 04:35:17.634002 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:35:17.634075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 04:35:17.634090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 04:35:17.634101 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:35:17.634128 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 04:35:17.634160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-04 04:35:17.634172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 04:35:17.634192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-04 04:35:17.634204 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:35:17.634215 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:35:17.634227 | orchestrator |
2026-02-04 04:35:17.634239 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-04 04:35:17.634252 | orchestrator | Wednesday 04 February 2026 04:34:49 +0000 (0:00:02.675) 0:00:38.076 ****
2026-02-04 04:35:17.634265 | orchestrator |
2026-02-04 04:35:17.634278 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-04 04:35:17.634291 | orchestrator | Wednesday 04 February 2026 04:34:49 +0000 (0:00:00.557) 0:00:38.634 ****
2026-02-04 04:35:17.634320 | orchestrator |
2026-02-04 04:35:17.634333 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-04 04:35:17.634346 | orchestrator | Wednesday 04 February 2026 04:34:50 +0000 (0:00:00.533) 0:00:39.167 ****
2026-02-04 04:35:17.634359 | orchestrator |
2026-02-04 04:35:17.634372 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-04 04:35:17.634384 | orchestrator | Wednesday 04 February 2026 04:34:50 +0000 (0:00:00.507) 0:00:39.675 ****
2026-02-04 04:35:17.634397 | orchestrator |
2026-02-04 04:35:17.634410 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-04 04:35:17.634423 | orchestrator | Wednesday 04 February 2026 04:34:51 +0000 (0:00:00.729) 0:00:40.404 ****
2026-02-04 04:35:17.634436 | orchestrator |
2026-02-04 04:35:17.634449 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-04 04:35:17.634461 | orchestrator | Wednesday 04 February 2026 04:34:52 +0000 (0:00:00.546) 0:00:40.951 ****
2026-02-04 04:35:17.634473 | orchestrator |
2026-02-04 04:35:17.634486 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-02-04 04:35:17.634499 | orchestrator | Wednesday 04 February 2026 04:34:52 +0000 (0:00:00.905) 0:00:41.856 ****
2026-02-04 04:35:17.634512 | orchestrator | changed: [testbed-node-3]
2026-02-04 04:35:17.634525 | orchestrator | changed: [testbed-node-4]
2026-02-04 04:35:17.634538 | orchestrator | changed: [testbed-node-5]
2026-02-04 04:35:17.634550 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:35:17.634563 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:35:17.634575 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:35:17.634589 | orchestrator |
2026-02-04 04:35:17.634607 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-02-04 04:35:17.634619 | orchestrator | Wednesday 04 February 2026 04:35:04 +0000 (0:00:11.475) 0:00:53.332 ****
2026-02-04 04:35:17.634630 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:35:17.634641 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:35:17.634652 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:35:17.634663 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:35:17.634677 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:35:17.634694 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:35:17.634712 | orchestrator |
2026-02-04 04:35:17.634753 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-04 04:35:17.634781 | orchestrator | Wednesday 04 February 2026 04:35:06 +0000 (0:00:02.297) 0:00:55.630 ****
2026-02-04 04:35:17.634793 | orchestrator | changed: [testbed-node-3]
2026-02-04 04:35:17.634804 | orchestrator | changed: [testbed-node-5]
2026-02-04 04:35:17.634815 | orchestrator | changed: [testbed-node-4]
2026-02-04 04:35:17.634825 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:35:17.634836 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:35:17.634847 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:35:17.634857 | orchestrator |
2026-02-04 04:35:17.634868 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-02-04 04:35:17.634889 | orchestrator | Wednesday 04 February 2026 04:35:17 +0000 (0:00:10.870) 0:01:06.500 ****
2026-02-04 04:35:34.042921 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-02-04 04:35:34.043038 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-02-04 04:35:34.043055 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-02-04 04:35:34.043067 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-02-04 04:35:34.043078 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-02-04 04:35:34.043089 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-02-04 04:35:34.043100 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-02-04 04:35:34.043111 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-02-04 04:35:34.043122 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-02-04 04:35:34.043133 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-02-04 04:35:34.043143 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-02-04 04:35:34.043154 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-02-04 04:35:34.043165 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-04 04:35:34.043176 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-04 04:35:34.043187 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-04 04:35:34.043198 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-04 04:35:34.043209 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-04 04:35:34.043220 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-04 04:35:34.043232 | orchestrator |
2026-02-04 04:35:34.043244 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-02-04 04:35:34.043256 | orchestrator | Wednesday 04 February 2026 04:35:25 +0000 (0:00:07.854) 0:01:14.355 ****
2026-02-04 04:35:34.043267 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-02-04 04:35:34.043293 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:35:34.043305 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-02-04 04:35:34.043326 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:35:34.043337 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-02-04 04:35:34.043348 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:35:34.043359 | orchestrator | ok: [testbed-node-0] => (item=br-ex)
2026-02-04 04:35:34.043394 | orchestrator | ok: [testbed-node-2] => (item=br-ex)
2026-02-04 04:35:34.043406 | orchestrator | ok: [testbed-node-1] => (item=br-ex)
2026-02-04 04:35:34.043416 | orchestrator |
2026-02-04 04:35:34.043427 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-02-04 04:35:34.043439 | orchestrator | Wednesday 04 February 2026 04:35:28 +0000 (0:00:03.482) 0:01:17.837 ****
2026-02-04 04:35:34.043452 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-02-04 04:35:34.043466 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:35:34.043479 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-02-04 04:35:34.043492 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:35:34.043505 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-02-04 04:35:34.043518 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:35:34.043548 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-02-04 04:35:34.043561 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-02-04 04:35:34.043574 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-02-04 04:35:34.043587 | orchestrator |
2026-02-04 04:35:34.043600 | orchestrator | PLAY RECAP *********************************************************************
2026-02-04 04:35:34.043614 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 04:35:34.043629 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 04:35:34.043643 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-04 04:35:34.043656 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 04:35:34.043687 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 04:35:34.043701 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-04 04:35:34.043713 | orchestrator |
2026-02-04 04:35:34.043726 | orchestrator |
2026-02-04 04:35:34.043759 | orchestrator | TASKS RECAP ********************************************************************
2026-02-04 04:35:34.043772 | orchestrator | Wednesday 04 February 2026 04:35:33 +0000 (0:00:04.576) 0:01:22.413 ****
2026-02-04 04:35:34.043786 | orchestrator | ===============================================================================
2026-02-04 04:35:34.043799 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.48s
2026-02-04 04:35:34.043810 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.87s
2026-02-04 04:35:34.043821 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.86s
2026-02-04 04:35:34.043832 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.58s
2026-02-04 04:35:34.043842 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.34s
2026-02-04 04:35:34.043853 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.78s
2026-02-04 04:35:34.043864 | orchestrator | openvswitch : include_tasks --------------------------------------------- 3.57s
2026-02-04 04:35:34.043875 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 3.52s
2026-02-04 04:35:34.043885 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.48s
2026-02-04 04:35:34.043896 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.78s
2026-02-04 04:35:34.043907 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.77s
2026-02-04 04:35:34.043927 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.71s
2026-02-04 04:35:34.043938 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.68s
2026-02-04 04:35:34.043948 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.44s
2026-02-04 04:35:34.043959 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.43s
2026-02-04 04:35:34.043970 | orchestrator | module-load : Load modules ---------------------------------------------- 2.41s
2026-02-04 04:35:34.043980 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.34s
2026-02-04 04:35:34.043992 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.30s
2026-02-04 04:35:34.044003 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.22s
2026-02-04 04:35:34.044014 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.18s
2026-02-04 04:35:34.381655 | orchestrator | + osism apply -a upgrade ovn
2026-02-04 04:35:36.488113 | orchestrator | 2026-02-04 04:35:36 | INFO  | Task c4d3268e-1b6f-431f-bc0b-1791088c1cb1 (ovn) was prepared for execution.
2026-02-04 04:35:36.488211 | orchestrator | 2026-02-04 04:35:36 | INFO  | It takes a moment until task c4d3268e-1b6f-431f-bc0b-1791088c1cb1 (ovn) has been started and output is visible here.
2026-02-04 04:35:59.892710 | orchestrator |
2026-02-04 04:35:59.892875 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-04 04:35:59.892899 | orchestrator |
2026-02-04 04:35:59.892917 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-04 04:35:59.892934 | orchestrator | Wednesday 04 February 2026 04:35:42 +0000 (0:00:01.379) 0:00:01.379 ****
2026-02-04 04:35:59.892950 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:35:59.892967 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:35:59.892985 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:35:59.893000 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:35:59.893016 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:35:59.893031 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:35:59.893047 | orchestrator |
2026-02-04 04:35:59.893062 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-04 04:35:59.893079 | orchestrator | Wednesday 04 February 2026 04:35:45 +0000 (0:00:03.540) 0:00:04.920 ****
2026-02-04 04:35:59.893097 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-02-04 04:35:59.893135 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-02-04 04:35:59.893154 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-02-04 04:35:59.893171 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-04 04:35:59.893188 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-04 04:35:59.893205 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-02-04 04:35:59.893222 | orchestrator |
2026-02-04 04:35:59.893239 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-02-04 04:35:59.893251 | orchestrator |
2026-02-04 04:35:59.893263 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-02-04 04:35:59.893281 | orchestrator | Wednesday 04 February 2026 04:35:49 +0000 (0:00:03.252) 0:00:08.172 ****
2026-02-04 04:35:59.893299 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 04:35:59.893317 | orchestrator |
2026-02-04 04:35:59.893333 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-02-04 04:35:59.893350 | orchestrator | Wednesday 04 February 2026 04:35:52 +0000 (0:00:03.059) 0:00:11.232 ****
2026-02-04 04:35:59.893369 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893421 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893442 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893459 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893477 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893519 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893539 | orchestrator |
2026-02-04 04:35:59.893556 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-02-04 04:35:59.893574 | orchestrator | Wednesday 04 February 2026 04:35:54 +0000 (0:00:02.195) 0:00:13.427 ****
2026-02-04 04:35:59.893601 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893620 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893635 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893663 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893679 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893697 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893715 | orchestrator |
2026-02-04 04:35:59.893731 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-02-04 04:35:59.893771 | orchestrator | Wednesday 04 February 2026 04:35:57 +0000 (0:00:03.331) 0:00:16.759 ****
2026-02-04 04:35:59.893782 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893792 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:35:59.893812 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:36:07.694110 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:36:07.694220 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:36:07.694285 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-04 04:36:07.694300 | orchestrator |
2026-02-04 04:36:07.694313 | orchestrator | TASK [ovn-controller : Copying over systemd override]
************************** 2026-02-04 04:36:07.694325 | orchestrator | Wednesday 04 February 2026 04:35:59 +0000 (0:00:02.243) 0:00:19.003 **** 2026-02-04 04:36:07.694337 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:36:07.694349 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:36:07.694360 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:36:07.694371 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:36:07.694382 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:36:07.694412 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:36:07.694424 | orchestrator | 2026-02-04 04:36:07.694436 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-02-04 04:36:07.694446 | orchestrator | Wednesday 04 February 2026 04:36:03 +0000 (0:00:03.146) 0:00:22.149 **** 2026-02-04 04:36:07.694465 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:36:07.694488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:36:07.694499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:36:07.694510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:36:07.694522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:36:07.694533 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:36:07.694545 | orchestrator | 2026-02-04 04:36:07.694556 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-02-04 04:36:07.694571 | orchestrator | Wednesday 04 February 2026 04:36:05 +0000 (0:00:02.562) 0:00:24.712 **** 2026-02-04 04:36:07.694584 | orchestrator | changed: [testbed-node-0] => { 2026-02-04 04:36:07.694597 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:36:07.694610 | orchestrator | } 2026-02-04 04:36:07.694624 | orchestrator | changed: [testbed-node-1] => { 2026-02-04 04:36:07.694637 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:36:07.694650 | orchestrator | } 2026-02-04 04:36:07.694663 | orchestrator | changed: [testbed-node-2] => { 2026-02-04 04:36:07.694675 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:36:07.694689 | orchestrator | } 2026-02-04 04:36:07.694702 | orchestrator | changed: [testbed-node-3] => { 2026-02-04 04:36:07.694713 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:36:07.694724 | orchestrator | } 2026-02-04 04:36:07.694766 | orchestrator | changed: [testbed-node-4] => { 2026-02-04 04:36:07.694779 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:36:07.694790 | orchestrator | } 2026-02-04 04:36:07.694801 | orchestrator | changed: [testbed-node-5] => { 2026-02-04 04:36:07.694812 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:36:07.694823 | orchestrator | } 2026-02-04 04:36:07.694834 | orchestrator | 2026-02-04 04:36:07.694852 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-04 04:36:07.694864 | orchestrator | Wednesday 04 February 2026 04:36:07 +0000 
(0:00:01.980) 0:00:26.693 **** 2026-02-04 04:36:07.694884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:36:38.102213 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:36:38.102350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:36:38.102374 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:36:38.102388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:36:38.102400 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:36:38.102412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:36:38.102424 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:36:38.102435 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:36:38.102446 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:36:38.102458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:36:38.102469 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:36:38.102480 | orchestrator | 2026-02-04 04:36:38.102492 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-04 04:36:38.102504 | orchestrator | Wednesday 04 February 2026 04:36:10 +0000 (0:00:02.493) 0:00:29.186 **** 2026-02-04 04:36:38.102515 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:36:38.102527 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:36:38.102538 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:36:38.102549 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:36:38.102559 | orchestrator | ok: [testbed-node-4] 
2026-02-04 04:36:38.102570 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:36:38.102603 | orchestrator | 2026-02-04 04:36:38.102615 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-04 04:36:38.102626 | orchestrator | Wednesday 04 February 2026 04:36:13 +0000 (0:00:03.754) 0:00:32.941 **** 2026-02-04 04:36:38.102637 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-04 04:36:38.102648 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-04 04:36:38.102659 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-04 04:36:38.102669 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-04 04:36:38.102680 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-04 04:36:38.102691 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-04 04:36:38.102702 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 04:36:38.102712 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 04:36:38.102723 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 04:36:38.102733 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 04:36:38.102797 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 04:36:38.102829 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-04 04:36:38.102850 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-04 04:36:38.102866 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-04 04:36:38.102879 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-04 04:36:38.102895 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-04 04:36:38.102914 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-04 04:36:38.102942 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-04 04:36:38.102963 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 04:36:38.102981 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 04:36:38.102998 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 04:36:38.103016 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 04:36:38.103033 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 04:36:38.103051 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-04 04:36:38.103069 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 04:36:38.103088 | orchestrator | ok: [testbed-node-5] => 
(item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 04:36:38.103106 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 04:36:38.103123 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 04:36:38.103140 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 04:36:38.103171 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-04 04:36:38.103188 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 04:36:38.103205 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 04:36:38.103223 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 04:36:38.103241 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 04:36:38.103260 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 04:36:38.103277 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-04 04:36:38.103296 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-04 04:36:38.103309 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-04 04:36:38.103320 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-04 04:36:38.103331 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-04 04:36:38.103342 | orchestrator | ok: [testbed-node-0] => (item={'name': 
'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-04 04:36:38.103352 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-04 04:36:38.103364 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-04 04:36:38.103383 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-04 04:36:38.103394 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-04 04:36:38.103405 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-04 04:36:38.103415 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-04 04:36:38.103437 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-04 04:39:27.212226 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-04 04:39:27.212358 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-04 04:39:27.212383 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-04 04:39:27.212399 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-04 04:39:27.212416 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 
'state': 'present'}) 2026-02-04 04:39:27.212430 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-04 04:39:27.212444 | orchestrator | 2026-02-04 04:39:27.212460 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 04:39:27.212475 | orchestrator | Wednesday 04 February 2026 04:36:34 +0000 (0:00:21.150) 0:00:54.092 **** 2026-02-04 04:39:27.212489 | orchestrator | 2026-02-04 04:39:27.212503 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 04:39:27.212546 | orchestrator | Wednesday 04 February 2026 04:36:35 +0000 (0:00:00.465) 0:00:54.557 **** 2026-02-04 04:39:27.212559 | orchestrator | 2026-02-04 04:39:27.212573 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 04:39:27.212587 | orchestrator | Wednesday 04 February 2026 04:36:35 +0000 (0:00:00.463) 0:00:55.021 **** 2026-02-04 04:39:27.212602 | orchestrator | 2026-02-04 04:39:27.212616 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 04:39:27.212630 | orchestrator | Wednesday 04 February 2026 04:36:36 +0000 (0:00:00.451) 0:00:55.473 **** 2026-02-04 04:39:27.212645 | orchestrator | 2026-02-04 04:39:27.212659 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 04:39:27.212670 | orchestrator | Wednesday 04 February 2026 04:36:36 +0000 (0:00:00.446) 0:00:55.919 **** 2026-02-04 04:39:27.212679 | orchestrator | 2026-02-04 04:39:27.212688 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-04 04:39:27.212696 | orchestrator | Wednesday 04 February 2026 04:36:37 +0000 (0:00:00.463) 0:00:56.383 **** 2026-02-04 04:39:27.212705 | orchestrator | 2026-02-04 04:39:27.212713 | 
orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-04 04:39:27.212721 | orchestrator | Wednesday 04 February 2026 04:36:38 +0000 (0:00:00.797) 0:00:57.180 **** 2026-02-04 04:39:27.212730 | orchestrator | 2026-02-04 04:39:27.212738 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] *** 2026-02-04 04:39:27.212748 | orchestrator | changed: [testbed-node-4] 2026-02-04 04:39:27.212757 | orchestrator | changed: [testbed-node-3] 2026-02-04 04:39:27.212816 | orchestrator | changed: [testbed-node-5] 2026-02-04 04:39:27.212826 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:39:27.212835 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:39:27.212843 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:39:27.212852 | orchestrator | 2026-02-04 04:39:27.212861 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-04 04:39:27.212869 | orchestrator | 2026-02-04 04:39:27.212878 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-04 04:39:27.212887 | orchestrator | Wednesday 04 February 2026 04:38:49 +0000 (0:02:11.612) 0:03:08.793 **** 2026-02-04 04:39:27.212895 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:39:27.212904 | orchestrator | 2026-02-04 04:39:27.212912 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-04 04:39:27.212921 | orchestrator | Wednesday 04 February 2026 04:38:51 +0000 (0:00:01.949) 0:03:10.742 **** 2026-02-04 04:39:27.212930 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-04 04:39:27.212939 | orchestrator | 2026-02-04 04:39:27.212947 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] 
************* 2026-02-04 04:39:27.212956 | orchestrator | Wednesday 04 February 2026 04:38:53 +0000 (0:00:02.013) 0:03:12.756 **** 2026-02-04 04:39:27.212965 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:39:27.212975 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:39:27.212984 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:39:27.212992 | orchestrator | 2026-02-04 04:39:27.213001 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-04 04:39:27.213009 | orchestrator | Wednesday 04 February 2026 04:38:55 +0000 (0:00:01.881) 0:03:14.638 **** 2026-02-04 04:39:27.213018 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:39:27.213026 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:39:27.213035 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:39:27.213043 | orchestrator | 2026-02-04 04:39:27.213052 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-04 04:39:27.213061 | orchestrator | Wednesday 04 February 2026 04:38:57 +0000 (0:00:01.709) 0:03:16.348 **** 2026-02-04 04:39:27.213069 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:39:27.213078 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:39:27.213095 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:39:27.213104 | orchestrator | 2026-02-04 04:39:27.213112 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-04 04:39:27.213121 | orchestrator | Wednesday 04 February 2026 04:38:58 +0000 (0:00:01.470) 0:03:17.818 **** 2026-02-04 04:39:27.213129 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:39:27.213138 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:39:27.213146 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:39:27.213155 | orchestrator | 2026-02-04 04:39:27.213163 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-04 04:39:27.213172 | orchestrator | Wednesday 04 
February 2026 04:39:00 +0000 (0:00:01.746) 0:03:19.564 **** 2026-02-04 04:39:27.213180 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:39:27.213207 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:39:27.213216 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:39:27.213225 | orchestrator | 2026-02-04 04:39:27.213242 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-04 04:39:27.213251 | orchestrator | Wednesday 04 February 2026 04:39:02 +0000 (0:00:01.630) 0:03:21.195 **** 2026-02-04 04:39:27.213260 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:39:27.213269 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:39:27.213278 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:39:27.213287 | orchestrator | 2026-02-04 04:39:27.213295 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-04 04:39:27.213304 | orchestrator | Wednesday 04 February 2026 04:39:03 +0000 (0:00:01.441) 0:03:22.636 **** 2026-02-04 04:39:27.213313 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:39:27.213321 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:39:27.213330 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:39:27.213338 | orchestrator | 2026-02-04 04:39:27.213347 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-04 04:39:27.213356 | orchestrator | Wednesday 04 February 2026 04:39:05 +0000 (0:00:01.781) 0:03:24.418 **** 2026-02-04 04:39:27.213364 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:39:27.213373 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:39:27.213381 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:39:27.213390 | orchestrator | 2026-02-04 04:39:27.213398 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-04 04:39:27.213407 | orchestrator | Wednesday 04 February 2026 04:39:06 +0000 (0:00:01.661) 0:03:26.080 
**** 2026-02-04 04:39:27.213416 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:39:27.213424 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:39:27.213432 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:39:27.213441 | orchestrator | 2026-02-04 04:39:27.213449 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-04 04:39:27.213458 | orchestrator | Wednesday 04 February 2026 04:39:08 +0000 (0:00:01.826) 0:03:27.906 **** 2026-02-04 04:39:27.213467 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:39:27.213475 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:39:27.213484 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:39:27.213492 | orchestrator | 2026-02-04 04:39:27.213501 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-04 04:39:27.213509 | orchestrator | Wednesday 04 February 2026 04:39:10 +0000 (0:00:01.355) 0:03:29.262 **** 2026-02-04 04:39:27.213518 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:39:27.213527 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:39:27.213535 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:39:27.213544 | orchestrator | 2026-02-04 04:39:27.213552 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-04 04:39:27.213561 | orchestrator | Wednesday 04 February 2026 04:39:11 +0000 (0:00:01.433) 0:03:30.695 **** 2026-02-04 04:39:27.213570 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:39:27.213579 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:39:27.213588 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:39:27.213602 | orchestrator | 2026-02-04 04:39:27.213616 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-04 04:39:27.213638 | orchestrator | Wednesday 04 February 2026 04:39:12 +0000 (0:00:01.388) 0:03:32.084 **** 2026-02-04 04:39:27.213675 | 
orchestrator | ok: [testbed-node-0] 2026-02-04 04:39:27.213684 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:39:27.213693 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:39:27.213701 | orchestrator | 2026-02-04 04:39:27.213710 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-04 04:39:27.213718 | orchestrator | Wednesday 04 February 2026 04:39:14 +0000 (0:00:01.814) 0:03:33.898 **** 2026-02-04 04:39:27.213727 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:39:27.213736 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:39:27.213744 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:39:27.213753 | orchestrator | 2026-02-04 04:39:27.213761 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-04 04:39:27.213788 | orchestrator | Wednesday 04 February 2026 04:39:16 +0000 (0:00:01.388) 0:03:35.287 **** 2026-02-04 04:39:27.213797 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:39:27.213806 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:39:27.213814 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:39:27.213822 | orchestrator | 2026-02-04 04:39:27.213831 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-04 04:39:27.213840 | orchestrator | Wednesday 04 February 2026 04:39:18 +0000 (0:00:02.152) 0:03:37.440 **** 2026-02-04 04:39:27.213848 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:39:27.213856 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:39:27.213865 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:39:27.213873 | orchestrator | 2026-02-04 04:39:27.213882 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-04 04:39:27.213890 | orchestrator | Wednesday 04 February 2026 04:39:19 +0000 (0:00:01.424) 0:03:38.864 **** 2026-02-04 04:39:27.213899 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:39:27.213907 | 
orchestrator | skipping: [testbed-node-1] 2026-02-04 04:39:27.213916 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:39:27.213924 | orchestrator | 2026-02-04 04:39:27.213933 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-04 04:39:27.213941 | orchestrator | Wednesday 04 February 2026 04:39:21 +0000 (0:00:01.442) 0:03:40.307 **** 2026-02-04 04:39:27.213950 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:39:27.213958 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:39:27.213967 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:39:27.213975 | orchestrator | 2026-02-04 04:39:27.213984 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-04 04:39:27.213992 | orchestrator | Wednesday 04 February 2026 04:39:22 +0000 (0:00:01.722) 0:03:42.030 **** 2026-02-04 04:39:27.214080 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:33.361401 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:33.361515 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:33.361558 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:33.361573 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:33.361585 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:33.361596 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:33.361609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:33.361655 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:33.361669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:33.361689 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:33.361701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:33.361713 | orchestrator | 2026-02-04 04:39:33.361726 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-04 04:39:33.361739 | orchestrator | Wednesday 04 February 2026 04:39:27 +0000 (0:00:04.294) 0:03:46.325 **** 2026-02-04 
04:39:33.361751 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:33.361833 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:33.361849 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:33.361867 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:33.361888 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:48.044233 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:48.044346 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:48.044370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:48.044388 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:48.044404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:48.044421 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:48.044455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:48.044486 | orchestrator | 2026-02-04 04:39:48.044498 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-02-04 04:39:48.044509 | orchestrator | Wednesday 04 February 2026 04:39:33 +0000 (0:00:06.143) 0:03:52.468 **** 2026-02-04 04:39:48.044519 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-02-04 04:39:48.044528 | orchestrator | 2026-02-04 04:39:48.044537 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-02-04 04:39:48.044545 | orchestrator | Wednesday 04 February 2026 04:39:35 +0000 (0:00:01.976) 0:03:54.445 **** 2026-02-04 04:39:48.044554 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:39:48.044564 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:39:48.044586 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:39:48.044595 | orchestrator | 2026-02-04 04:39:48.044604 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-02-04 04:39:48.044613 | orchestrator | Wednesday 04 February 2026 04:39:37 +0000 
(0:00:01.778) 0:03:56.223 **** 2026-02-04 04:39:48.044622 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:39:48.044631 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:39:48.044639 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:39:48.044648 | orchestrator | 2026-02-04 04:39:48.044656 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-02-04 04:39:48.044665 | orchestrator | Wednesday 04 February 2026 04:39:39 +0000 (0:00:02.631) 0:03:58.855 **** 2026-02-04 04:39:48.044674 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:39:48.044683 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:39:48.044698 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:39:48.044713 | orchestrator | 2026-02-04 04:39:48.044727 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-02-04 04:39:48.044742 | orchestrator | Wednesday 04 February 2026 04:39:42 +0000 (0:00:02.895) 0:04:01.751 **** 2026-02-04 04:39:48.044758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:48.044802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 
'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:48.044818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:48.044835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:48.044861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 
04:39:48.044872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:48.044891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:52.644012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:52.644143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:52.644173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:52.644191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:39:52.644229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:52.644242 | orchestrator | 2026-02-04 04:39:52.644255 | orchestrator | TASK [service-check-containers : 
ovn_db | Notify handlers to restart containers] *** 2026-02-04 04:39:52.644268 | orchestrator | Wednesday 04 February 2026 04:39:48 +0000 (0:00:05.393) 0:04:07.144 **** 2026-02-04 04:39:52.644279 | orchestrator | changed: [testbed-node-0] => { 2026-02-04 04:39:52.644291 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:39:52.644303 | orchestrator | } 2026-02-04 04:39:52.644313 | orchestrator | changed: [testbed-node-1] => { 2026-02-04 04:39:52.644324 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:39:52.644335 | orchestrator | } 2026-02-04 04:39:52.644345 | orchestrator | changed: [testbed-node-2] => { 2026-02-04 04:39:52.644356 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:39:52.644367 | orchestrator | } 2026-02-04 04:39:52.644377 | orchestrator | 2026-02-04 04:39:52.644405 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-04 04:39:52.644424 | orchestrator | Wednesday 04 February 2026 04:39:49 +0000 (0:00:01.407) 0:04:08.552 **** 2026-02-04 04:39:52.644446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:52.644492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:52.644514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:52.644536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:52.644558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-04 04:39:52.644588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:52.644602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:52.644622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:52.644635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-04 04:39:52.644658 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-04 04:41:27.155428 | orchestrator | 2026-02-04 04:41:27.155530 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-02-04 04:41:27.155545 | orchestrator | Wednesday 04 February 2026 04:39:52 +0000 (0:00:03.206) 0:04:11.758 **** 2026-02-04 04:41:27.155555 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-02-04 04:41:27.155564 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-02-04 04:41:27.155572 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-02-04 04:41:27.155580 | orchestrator | 2026-02-04 04:41:27.155589 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-04 04:41:27.155598 | orchestrator | Wednesday 04 February 2026 04:39:55 +0000 (0:00:02.435) 0:04:14.194 **** 2026-02-04 04:41:27.155606 | orchestrator | changed: [testbed-node-0] => { 2026-02-04 04:41:27.155636 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:41:27.155645 | orchestrator | } 
2026-02-04 04:41:27.155653 | orchestrator | changed: [testbed-node-1] => { 2026-02-04 04:41:27.155661 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:41:27.155669 | orchestrator | } 2026-02-04 04:41:27.155677 | orchestrator | changed: [testbed-node-2] => { 2026-02-04 04:41:27.155685 | orchestrator |  "msg": "Notifying handlers" 2026-02-04 04:41:27.155692 | orchestrator | } 2026-02-04 04:41:27.155700 | orchestrator | 2026-02-04 04:41:27.155708 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 04:41:27.155716 | orchestrator | Wednesday 04 February 2026 04:39:56 +0000 (0:00:01.479) 0:04:15.674 **** 2026-02-04 04:41:27.155724 | orchestrator | 2026-02-04 04:41:27.155732 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 04:41:27.155740 | orchestrator | Wednesday 04 February 2026 04:39:57 +0000 (0:00:00.466) 0:04:16.140 **** 2026-02-04 04:41:27.155748 | orchestrator | 2026-02-04 04:41:27.155756 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-04 04:41:27.155764 | orchestrator | Wednesday 04 February 2026 04:39:57 +0000 (0:00:00.469) 0:04:16.609 **** 2026-02-04 04:41:27.155772 | orchestrator | 2026-02-04 04:41:27.155846 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-04 04:41:27.155860 | orchestrator | Wednesday 04 February 2026 04:39:58 +0000 (0:00:01.036) 0:04:17.646 **** 2026-02-04 04:41:27.155872 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:41:27.155885 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:41:27.155898 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:41:27.155912 | orchestrator | 2026-02-04 04:41:27.155925 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-04 04:41:27.155938 | orchestrator | Wednesday 04 February 2026 04:40:15 +0000 
(0:00:16.869) 0:04:34.516 **** 2026-02-04 04:41:27.155952 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:41:27.155961 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:41:27.155969 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:41:27.155977 | orchestrator | 2026-02-04 04:41:27.155985 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-02-04 04:41:27.155993 | orchestrator | Wednesday 04 February 2026 04:40:32 +0000 (0:00:16.802) 0:04:51.318 **** 2026-02-04 04:41:27.156001 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-02-04 04:41:27.156009 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-02-04 04:41:27.156017 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-02-04 04:41:27.156025 | orchestrator | 2026-02-04 04:41:27.156032 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-04 04:41:27.156040 | orchestrator | Wednesday 04 February 2026 04:40:48 +0000 (0:00:16.312) 0:05:07.630 **** 2026-02-04 04:41:27.156048 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:41:27.156056 | orchestrator | changed: [testbed-node-1] 2026-02-04 04:41:27.156064 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:41:27.156072 | orchestrator | 2026-02-04 04:41:27.156080 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-04 04:41:27.156088 | orchestrator | Wednesday 04 February 2026 04:41:06 +0000 (0:00:17.751) 0:05:25.382 **** 2026-02-04 04:41:27.156096 | orchestrator | Pausing for 5 seconds 2026-02-04 04:41:27.156104 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:41:27.156112 | orchestrator | 2026-02-04 04:41:27.156120 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-04 04:41:27.156209 | orchestrator | Wednesday 04 February 2026 04:41:12 +0000 (0:00:06.222) 0:05:31.605 **** 2026-02-04 04:41:27.156225 
| orchestrator | ok: [testbed-node-0] 2026-02-04 04:41:27.156234 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:41:27.156241 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:41:27.156249 | orchestrator | 2026-02-04 04:41:27.156257 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-04 04:41:27.156265 | orchestrator | Wednesday 04 February 2026 04:41:14 +0000 (0:00:01.859) 0:05:33.464 **** 2026-02-04 04:41:27.156282 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:41:27.156291 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:41:27.156299 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:41:27.156306 | orchestrator | 2026-02-04 04:41:27.156314 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-04 04:41:27.156322 | orchestrator | Wednesday 04 February 2026 04:41:16 +0000 (0:00:01.770) 0:05:35.235 **** 2026-02-04 04:41:27.156330 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:41:27.156338 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:41:27.156346 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:41:27.156354 | orchestrator | 2026-02-04 04:41:27.156362 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-04 04:41:27.156370 | orchestrator | Wednesday 04 February 2026 04:41:18 +0000 (0:00:01.902) 0:05:37.137 **** 2026-02-04 04:41:27.156377 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:41:27.156385 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:41:27.156393 | orchestrator | changed: [testbed-node-2] 2026-02-04 04:41:27.156401 | orchestrator | 2026-02-04 04:41:27.156409 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-04 04:41:27.156417 | orchestrator | Wednesday 04 February 2026 04:41:19 +0000 (0:00:01.870) 0:05:39.008 **** 2026-02-04 04:41:27.156424 | orchestrator | ok: [testbed-node-0] 
2026-02-04 04:41:27.156432 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:41:27.156440 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:41:27.156448 | orchestrator | 2026-02-04 04:41:27.156456 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-04 04:41:27.156480 | orchestrator | Wednesday 04 February 2026 04:41:21 +0000 (0:00:01.859) 0:05:40.868 **** 2026-02-04 04:41:27.156489 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:41:27.156497 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:41:27.156505 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:41:27.156513 | orchestrator | 2026-02-04 04:41:27.156521 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-02-04 04:41:27.156529 | orchestrator | Wednesday 04 February 2026 04:41:23 +0000 (0:00:01.882) 0:05:42.751 **** 2026-02-04 04:41:27.156537 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-02-04 04:41:27.156544 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-02-04 04:41:27.156552 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-02-04 04:41:27.156560 | orchestrator | 2026-02-04 04:41:27.156568 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 04:41:27.156578 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 04:41:27.156587 | orchestrator | testbed-node-1 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-04 04:41:27.156595 | orchestrator | testbed-node-2 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-04 04:41:27.156603 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 04:41:27.156611 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 04:41:27.156619 | 
orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 04:41:27.156627 | orchestrator | 2026-02-04 04:41:27.156635 | orchestrator | 2026-02-04 04:41:27.156642 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 04:41:27.156650 | orchestrator | Wednesday 04 February 2026 04:41:26 +0000 (0:00:03.068) 0:05:45.820 **** 2026-02-04 04:41:27.156658 | orchestrator | =============================================================================== 2026-02-04 04:41:27.156672 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.61s 2026-02-04 04:41:27.156680 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 21.15s 2026-02-04 04:41:27.156688 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.75s 2026-02-04 04:41:27.156696 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 16.87s 2026-02-04 04:41:27.156703 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.80s 2026-02-04 04:41:27.156711 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 16.31s 2026-02-04 04:41:27.156719 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.22s 2026-02-04 04:41:27.156727 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.14s 2026-02-04 04:41:27.156735 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.39s 2026-02-04 04:41:27.156743 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.29s 2026-02-04 04:41:27.156754 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.75s 2026-02-04 04:41:27.156762 | orchestrator | Group hosts based on Kolla action 
--------------------------------------- 3.54s 2026-02-04 04:41:27.156770 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 3.33s 2026-02-04 04:41:27.156814 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.25s 2026-02-04 04:41:27.156823 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.21s 2026-02-04 04:41:27.156831 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.15s 2026-02-04 04:41:27.156839 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.09s 2026-02-04 04:41:27.156846 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 3.07s 2026-02-04 04:41:27.156854 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 3.06s 2026-02-04 04:41:27.156862 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.90s 2026-02-04 04:41:27.476336 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-04 04:41:27.476424 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-04 04:41:27.476437 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh 2026-02-04 04:41:27.485758 | orchestrator | + set -e 2026-02-04 04:41:27.485903 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-04 04:41:27.485921 | orchestrator | ++ export INTERACTIVE=false 2026-02-04 04:41:27.485938 | orchestrator | ++ INTERACTIVE=false 2026-02-04 04:41:27.485954 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-04 04:41:27.485970 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-04 04:41:27.485986 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-02-04 04:41:29.573204 | orchestrator | 2026-02-04 04:41:29 | INFO  | Task ed888142-6e46-4d0d-ac40-3be204582b2e (ceph-rolling_update) was prepared for 
execution. 2026-02-04 04:41:29.573292 | orchestrator | 2026-02-04 04:41:29 | INFO  | It takes a moment until task ed888142-6e46-4d0d-ac40-3be204582b2e (ceph-rolling_update) has been started and output is visible here. 2026-02-04 04:42:57.288099 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-04 04:42:57.288218 | orchestrator | 2.16.14 2026-02-04 04:42:57.288236 | orchestrator | 2026-02-04 04:42:57.288249 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-02-04 04:42:57.288261 | orchestrator | 2026-02-04 04:42:57.288272 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-02-04 04:42:57.288283 | orchestrator | Wednesday 04 February 2026 04:41:38 +0000 (0:00:01.866) 0:00:01.866 **** 2026-02-04 04:42:57.288295 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-02-04 04:42:57.288306 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-02-04 04:42:57.288317 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-02-04 04:42:57.288352 | orchestrator | skipping: [localhost] 2026-02-04 04:42:57.288364 | orchestrator | 2026-02-04 04:42:57.288375 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-02-04 04:42:57.288386 | orchestrator | 2026-02-04 04:42:57.288396 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-02-04 04:42:57.288407 | orchestrator | Wednesday 04 February 2026 04:41:41 +0000 (0:00:03.004) 0:00:04.871 **** 2026-02-04 04:42:57.288418 | orchestrator | ok: [testbed-node-0] => { 2026-02-04 04:42:57.288429 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 04:42:57.288440 | orchestrator | } 2026-02-04 04:42:57.288451 | orchestrator | ok: [testbed-node-1] => { 
2026-02-04 04:42:57.288462 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 04:42:57.288472 | orchestrator | } 2026-02-04 04:42:57.288483 | orchestrator | ok: [testbed-node-2] => { 2026-02-04 04:42:57.288494 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 04:42:57.288504 | orchestrator | } 2026-02-04 04:42:57.288515 | orchestrator | ok: [testbed-node-3] => { 2026-02-04 04:42:57.288525 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 04:42:57.288537 | orchestrator | } 2026-02-04 04:42:57.288548 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 04:42:57.288558 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 04:42:57.288569 | orchestrator | } 2026-02-04 04:42:57.288580 | orchestrator | ok: [testbed-node-5] => { 2026-02-04 04:42:57.288590 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 04:42:57.288601 | orchestrator | } 2026-02-04 04:42:57.288612 | orchestrator | ok: [testbed-manager] => { 2026-02-04 04:42:57.288622 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 04:42:57.288633 | orchestrator | } 2026-02-04 04:42:57.288644 | orchestrator | 2026-02-04 04:42:57.288654 | orchestrator | TASK [Gather facts] ************************************************************ 2026-02-04 04:42:57.288665 | orchestrator | Wednesday 04 February 2026 04:41:46 +0000 (0:00:05.201) 0:00:10.073 **** 2026-02-04 04:42:57.288676 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:42:57.288687 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:42:57.288697 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:42:57.288708 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:42:57.288719 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:42:57.288729 | orchestrator | skipping: [testbed-node-5] 
2026-02-04 04:42:57.288740 | orchestrator | ok: [testbed-manager] 2026-02-04 04:42:57.288751 | orchestrator | 2026-02-04 04:42:57.288762 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-02-04 04:42:57.288772 | orchestrator | Wednesday 04 February 2026 04:41:53 +0000 (0:00:06.157) 0:00:16.230 **** 2026-02-04 04:42:57.288783 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-04 04:42:57.288794 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-04 04:42:57.288805 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 04:42:57.288865 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-04 04:42:57.288879 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-04 04:42:57.288890 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 04:42:57.288901 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 04:42:57.288912 | orchestrator | 2026-02-04 04:42:57.288923 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-02-04 04:42:57.288933 | orchestrator | Wednesday 04 February 2026 04:42:26 +0000 (0:00:33.402) 0:00:49.633 **** 2026-02-04 04:42:57.288944 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:42:57.288963 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:42:57.288974 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:42:57.288984 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:42:57.288995 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:42:57.289005 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:42:57.289016 | orchestrator | ok: [testbed-manager] 2026-02-04 04:42:57.289027 | orchestrator | 2026-02-04 04:42:57.289037 | 
orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-04 04:42:57.289048 | orchestrator | Wednesday 04 February 2026 04:42:28 +0000 (0:00:02.141) 0:00:51.774 **** 2026-02-04 04:42:57.289059 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-04 04:42:57.289072 | orchestrator | 2026-02-04 04:42:57.289083 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-04 04:42:57.289093 | orchestrator | Wednesday 04 February 2026 04:42:31 +0000 (0:00:02.867) 0:00:54.641 **** 2026-02-04 04:42:57.289104 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:42:57.289114 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:42:57.289125 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:42:57.289136 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:42:57.289146 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:42:57.289157 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:42:57.289168 | orchestrator | ok: [testbed-manager] 2026-02-04 04:42:57.289179 | orchestrator | 2026-02-04 04:42:57.289206 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-04 04:42:57.289218 | orchestrator | Wednesday 04 February 2026 04:42:33 +0000 (0:00:02.508) 0:00:57.150 **** 2026-02-04 04:42:57.289229 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:42:57.289240 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:42:57.289250 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:42:57.289261 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:42:57.289271 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:42:57.289281 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:42:57.289292 | orchestrator | ok: [testbed-manager] 2026-02-04 04:42:57.289303 | orchestrator | 2026-02-04 04:42:57.289313 | orchestrator | 
TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-04 04:42:57.289324 | orchestrator | Wednesday 04 February 2026 04:42:35 +0000 (0:00:01.899) 0:00:59.049 **** 2026-02-04 04:42:57.289335 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:42:57.289345 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:42:57.289356 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:42:57.289366 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:42:57.289377 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:42:57.289388 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:42:57.289398 | orchestrator | ok: [testbed-manager] 2026-02-04 04:42:57.289409 | orchestrator | 2026-02-04 04:42:57.289419 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-04 04:42:57.289430 | orchestrator | Wednesday 04 February 2026 04:42:38 +0000 (0:00:02.526) 0:01:01.576 **** 2026-02-04 04:42:57.289441 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:42:57.289452 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:42:57.289462 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:42:57.289473 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:42:57.289483 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:42:57.289494 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:42:57.289505 | orchestrator | ok: [testbed-manager] 2026-02-04 04:42:57.289515 | orchestrator | 2026-02-04 04:42:57.289526 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-04 04:42:57.289537 | orchestrator | Wednesday 04 February 2026 04:42:40 +0000 (0:00:01.918) 0:01:03.495 **** 2026-02-04 04:42:57.289547 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:42:57.289558 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:42:57.289569 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:42:57.289579 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:42:57.289590 | orchestrator | ok: 
[testbed-node-4] 2026-02-04 04:42:57.289608 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:42:57.289619 | orchestrator | ok: [testbed-manager] 2026-02-04 04:42:57.289643 | orchestrator | 2026-02-04 04:42:57.289665 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-04 04:42:57.289676 | orchestrator | Wednesday 04 February 2026 04:42:42 +0000 (0:00:02.165) 0:01:05.661 **** 2026-02-04 04:42:57.289687 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:42:57.289698 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:42:57.289708 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:42:57.289719 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:42:57.289735 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:42:57.289754 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:42:57.289772 | orchestrator | ok: [testbed-manager] 2026-02-04 04:42:57.289791 | orchestrator | 2026-02-04 04:42:57.289808 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-04 04:42:57.289888 | orchestrator | Wednesday 04 February 2026 04:42:44 +0000 (0:00:02.040) 0:01:07.701 **** 2026-02-04 04:42:57.289909 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:42:57.289928 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:42:57.289939 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:42:57.289950 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:42:57.289960 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:42:57.289971 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:42:57.289982 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:42:57.289993 | orchestrator | 2026-02-04 04:42:57.290004 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-04 04:42:57.290014 | orchestrator | Wednesday 04 February 2026 04:42:46 +0000 (0:00:02.204) 0:01:09.906 **** 2026-02-04 04:42:57.290094 | 
orchestrator | ok: [testbed-node-0] 2026-02-04 04:42:57.290105 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:42:57.290116 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:42:57.290127 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:42:57.290146 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:42:57.290157 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:42:57.290168 | orchestrator | ok: [testbed-manager] 2026-02-04 04:42:57.290178 | orchestrator | 2026-02-04 04:42:57.290189 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-04 04:42:57.290200 | orchestrator | Wednesday 04 February 2026 04:42:48 +0000 (0:00:01.996) 0:01:11.903 **** 2026-02-04 04:42:57.290211 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 04:42:57.290222 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 04:42:57.290233 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 04:42:57.290244 | orchestrator | 2026-02-04 04:42:57.290254 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-04 04:42:57.290265 | orchestrator | Wednesday 04 February 2026 04:42:50 +0000 (0:00:01.759) 0:01:13.662 **** 2026-02-04 04:42:57.290276 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:42:57.290287 | orchestrator | ok: [testbed-node-1] 2026-02-04 04:42:57.290298 | orchestrator | ok: [testbed-node-2] 2026-02-04 04:42:57.290308 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:42:57.290319 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:42:57.290330 | orchestrator | ok: [testbed-node-4] 2026-02-04 04:42:57.290341 | orchestrator | ok: [testbed-manager] 2026-02-04 04:42:57.290351 | orchestrator | 2026-02-04 04:42:57.290362 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-04 04:42:57.290373 | 
orchestrator | Wednesday 04 February 2026 04:42:52 +0000 (0:00:02.097) 0:01:15.760 **** 2026-02-04 04:42:57.290384 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 04:42:57.290395 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 04:42:57.290406 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 04:42:57.290417 | orchestrator | 2026-02-04 04:42:57.290427 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-04 04:42:57.290449 | orchestrator | Wednesday 04 February 2026 04:42:55 +0000 (0:00:03.275) 0:01:19.036 **** 2026-02-04 04:42:57.290470 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 04:43:20.602800 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 04:43:20.602983 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 04:43:20.603001 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:43:20.603014 | orchestrator | 2026-02-04 04:43:20.603026 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-04 04:43:20.603039 | orchestrator | Wednesday 04 February 2026 04:42:57 +0000 (0:00:01.436) 0:01:20.472 **** 2026-02-04 04:43:20.603052 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-04 04:43:20.603066 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-04 04:43:20.603078 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-04 04:43:20.603089 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:43:20.603100 | orchestrator | 2026-02-04 04:43:20.603112 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-04 04:43:20.603124 | orchestrator | Wednesday 04 February 2026 04:42:59 +0000 (0:00:01.866) 0:01:22.339 **** 2026-02-04 04:43:20.603146 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:20.603169 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:20.603188 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:20.603205 | orchestrator | skipping: [testbed-node-0] 
2026-02-04 04:43:20.603217 | orchestrator |
2026-02-04 04:43:20.603246 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-04 04:43:20.603258 | orchestrator | Wednesday 04 February 2026 04:43:00 +0000 (0:00:01.157) 0:01:23.497 ****
2026-02-04 04:43:20.603272 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd8f725914c3c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-04 04:42:53.232031', 'end': '2026-02-04 04:42:53.286366', 'delta': '0:00:00.054335', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d8f725914c3c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-04 04:43:20.603327 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'e8207b686900', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-04 04:42:54.067042', 'end': '2026-02-04 04:42:54.110545', 'delta': '0:00:00.043503', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e8207b686900'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-04 04:43:20.603344 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c48be97cec44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-04 04:42:54.624754', 'end': '2026-02-04 04:42:54.666576', 'delta': '0:00:00.041822', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c48be97cec44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-04 04:43:20.603357 | orchestrator |
2026-02-04 04:43:20.603370 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-04 04:43:20.603383 | orchestrator | Wednesday 04 February 2026 04:43:01 +0000 (0:00:01.179) 0:01:24.676 ****
2026-02-04 04:43:20.603397 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:43:20.603411 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:43:20.603424 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:43:20.603436 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:43:20.603448 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:43:20.603461 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:43:20.603474 | orchestrator | ok: [testbed-manager]
2026-02-04 04:43:20.603487 | orchestrator |
2026-02-04 04:43:20.603500 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-04 04:43:20.603514 | orchestrator | Wednesday 04 February 2026 04:43:03 +0000 (0:00:02.220) 0:01:26.897 ****
2026-02-04 04:43:20.603526 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:20.603540 | orchestrator |
2026-02-04 04:43:20.603551 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-04 04:43:20.603562 | orchestrator | Wednesday 04 February 2026 04:43:04 +0000 (0:00:01.245) 0:01:28.142 ****
2026-02-04 04:43:20.603572 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:43:20.603583 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:43:20.603594 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:43:20.603604 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:43:20.603615 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:43:20.603625 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:43:20.603636 | orchestrator | ok: [testbed-manager]
2026-02-04 04:43:20.603648 | orchestrator |
2026-02-04 04:43:20.603667 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-04 04:43:20.603686 | orchestrator | Wednesday 04 February 2026 04:43:07 +0000 (0:00:02.246) 0:01:30.389 ****
2026-02-04 04:43:20.603705 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:43:20.603723 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-04 04:43:20.603735 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-04 04:43:20.603746 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-04 04:43:20.603766 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-04 04:43:20.603777 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-04 04:43:20.603788 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-04 04:43:20.603798 | orchestrator |
2026-02-04 04:43:20.603809 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-04 04:43:20.603826 | orchestrator | Wednesday 04 February 2026 04:43:11 +0000 (0:00:04.303) 0:01:34.692 ****
2026-02-04 04:43:20.603881 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:43:20.603894 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:43:20.603905 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:43:20.603915 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:43:20.603926 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:43:20.603937 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:43:20.603947 | orchestrator | ok: [testbed-manager]
2026-02-04 04:43:20.603958 | orchestrator |
2026-02-04 04:43:20.603969 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-04 04:43:20.603980 | orchestrator | Wednesday 04 February 2026 04:43:13 +0000 (0:00:02.165) 0:01:36.858 ****
2026-02-04 04:43:20.603991 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:20.604002 | orchestrator |
2026-02-04 04:43:20.604012 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-04 04:43:20.604023 | orchestrator | Wednesday 04 February 2026 04:43:14 +0000 (0:00:01.108) 0:01:37.967 ****
2026-02-04 04:43:20.604034 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:20.604045 | orchestrator |
2026-02-04 04:43:20.604056 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-04 04:43:20.604066 | orchestrator | Wednesday 04 February 2026 04:43:16 +0000 (0:00:01.266) 0:01:39.233 ****
2026-02-04 04:43:20.604077 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:20.604088 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:43:20.604099 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:43:20.604109 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:43:20.604120 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:43:20.604131 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:43:20.604141 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:43:20.604152 | orchestrator |
2026-02-04 04:43:20.604163 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-04 04:43:20.604173 | orchestrator | Wednesday 04 February 2026 04:43:18 +0000 (0:00:02.578) 0:01:41.812 ****
2026-02-04 04:43:20.604184 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:20.604195 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:43:20.604206 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:43:20.604216 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:43:20.604227 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:43:20.604238 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:43:20.604256 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:43:31.527239 | orchestrator |
2026-02-04 04:43:31.527370 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-04 04:43:31.527389 | orchestrator | Wednesday 04 February 2026 04:43:20 +0000 (0:00:01.974) 0:01:43.787 ****
2026-02-04 04:43:31.527401 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:31.527413 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:43:31.527424 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:43:31.527435 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:43:31.527490 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:43:31.527504 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:43:31.527515 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:43:31.527527 | orchestrator |
2026-02-04 04:43:31.527538 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-04 04:43:31.527549 | orchestrator | Wednesday 04 February 2026 04:43:22 +0000 (0:00:02.143) 0:01:45.931 ****
2026-02-04 04:43:31.527560 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:31.527595 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:43:31.527607 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:43:31.527618 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:43:31.527628 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:43:31.527639 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:43:31.527650 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:43:31.527661 | orchestrator |
2026-02-04 04:43:31.527672 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-04 04:43:31.527683 | orchestrator | Wednesday 04 February 2026 04:43:24 +0000 (0:00:02.161) 0:01:48.093 ****
2026-02-04 04:43:31.527694 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:31.527705 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:43:31.527715 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:43:31.527726 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:43:31.527737 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:43:31.527747 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:43:31.527761 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:43:31.527774 | orchestrator |
2026-02-04 04:43:31.527786 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-04 04:43:31.527799 | orchestrator | Wednesday 04 February 2026 04:43:27 +0000 (0:00:02.189) 0:01:50.282 ****
2026-02-04 04:43:31.527811 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:31.527824 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:43:31.527837 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:43:31.527889 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:43:31.527902 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:43:31.527915 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:43:31.527928 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:43:31.527939 | orchestrator |
2026-02-04 04:43:31.527950 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-04 04:43:31.527961 | orchestrator | Wednesday 04 February 2026 04:43:29 +0000 (0:00:01.919) 0:01:52.201 ****
2026-02-04 04:43:31.527972 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:31.527983 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:43:31.527994 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:43:31.528004 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:43:31.528015 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:43:31.528025 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:43:31.528036 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:43:31.528047 | orchestrator |
2026-02-04 04:43:31.528058 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-04 04:43:31.528069 | orchestrator | Wednesday 04 February 2026 04:43:31 +0000 (0:00:02.234) 0:01:54.436 ****
2026-02-04 04:43:31.528096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.528111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.528123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.528166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-04 04:43:31.528181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.528193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.528204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.528226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5c0a15c2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-04 04:43:31.528252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.528273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.646009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.646170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.646185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.646199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-04 04:43:31.646214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.646243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.646255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.646314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '50d185a4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part16', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part14', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part15', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part1', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-04 04:43:31.646331 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:31.646344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.646356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.646373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.646384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.646403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.646414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-04 04:43:31.646435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.980498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.980604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.980644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '853c0bfc', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part16', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part14', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part15', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part1', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-04 04:43:31.980712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.980727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.980741 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:43:31.980773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.980789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e', 'dm-uuid-LVM-BggcAryejjvGBF4uvp6BcYG8cW5k2lInqXUvcrL0euXIKDnaXO5lD17ef9ulmfzT'], 'uuids': ['f158fdb8-bb9c-48fc-8ca9-031d13c41132'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '859f82ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['qXUvcr-L0eu-XIKD-naXO-5lD1-7ef9-ulmfzT']}})
2026-02-04 04:43:31.980804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811', 'scsi-SQEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10db325f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-04 04:43:31.980900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-LUqg5q-XQXl-4J84-Fu4r-xNUp-Z07d-jQvh8Z', 'scsi-0QEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388', 'scsi-SQEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9e979b3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f']}})
2026-02-04 04:43:31.980926 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.980940 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:43:31.980954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:31.980968 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-19-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-04 04:43:31.980992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:32.005197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8YdRgn-42SX-PKkS-Smdq-nloX-2coy-a2uTEh', 'dm-uuid-CRYPT-LUKS2-2302d1af8aee4d9d86e1dfe7dfc67d39-8YdRgn-42SX-PKkS-Smdq-nloX-2coy-a2uTEh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-04 04:43:32.005263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:32.005274 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-04 04:43:32.005294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f', 'dm-uuid-LVM-8XaWcwBldrFACyhn8O8pDrkh8WYfwfMh8YdRgn42SXPKkSSmdqnloX2coya2uTEh'], 'uuids': ['2302d1af-8aee-4d9d-86e1-dfe7dfc67d39'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9e979b3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8YdRgn-42SX-PKkS-Smdq-nloX-2coy-a2uTEh']}})
2026-02-04 04:43:32.005321 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843', 'dm-uuid-LVM-GuQppvMqMgPM92HHdmch1RUlEtgMK7bAQGkZWEBmxgWBBqnmby4j6kn1XrU8W6rj'], 'uuids': ['18125888-7064-431c-840e-0a8e7e279804'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '87322fe2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QGkZWE-Bmxg-WBBq-nmby-4j6k-n1Xr-U8W6rj']}})
2026-02-04 04:43:32.005329 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23', 'scsi-SQEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0d2f838', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-04 04:43:32.005347 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PkP1x1-WFQe-TRGf-2R1c-oEQv-Qw43-IKwaXF', 'scsi-0QEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40', 'scsi-SQEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '859f82ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e']}})  2026-02-04 04:43:32.005357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lVamx9-eYv9-88F9-1eWN-Mo2X-ZvoC-DQM8Qk', 'scsi-0QEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536', 'scsi-SQEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d2cd144', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c']}})  2026-02-04 04:43:32.005364 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.005371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.005388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.005399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 
'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-04 04:43:32.005421 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e5ab81eb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 
'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-04 04:43:32.181257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.181351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5ctI13-zu0z-0FRb-KQOR-FQtA-0W3p-u2nuf0', 'dm-uuid-CRYPT-LUKS2-81c8120205304967adb7cc6e42b3aaa8-5ctI13-zu0z-0FRb-KQOR-FQtA-0W3p-u2nuf0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-04 04:43:32.181403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.181416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 
'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.181426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c', 'dm-uuid-LVM-jabOFLmF8RS1U4YRftNuTtdThdIFxea35ctI13zu0z0FRbKQORFQtA0W3pu2nuf0'], 'uuids': ['81c81202-0530-4967-adb7-cc6e42b3aaa8'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6d2cd144', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5ctI13-zu0z-0FRb-KQOR-FQtA-0W3p-u2nuf0']}})  2026-02-04 04:43:32.181436 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.181446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1Bwhrb-Xrjl-JUvU-1GoK-f7aN-SV93-uYzfRx', 'scsi-0QEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd', 'scsi-SQEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '87322fe2', 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843']}})  2026-02-04 04:43:32.181473 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.181484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-qXUvcr-L0eu-XIKD-naXO-5lD1-7ef9-ulmfzT', 'dm-uuid-CRYPT-LUKS2-f158fdb8bb9c48fc8ca9031d13c41132-qXUvcr-L0eu-XIKD-naXO-5lD1-7ef9-ulmfzT'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-04 04:43:32.181509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd5a1c69a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16', 
'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-04 04:43:32.181521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.181579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.181599 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QGkZWE-Bmxg-WBBq-nmby-4j6k-n1Xr-U8W6rj', 'dm-uuid-CRYPT-LUKS2-181258887064431c840e0a8e7e279804-QGkZWE-Bmxg-WBBq-nmby-4j6k-n1Xr-U8W6rj'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-04 04:43:32.399545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-02-04 04:43:32.399666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639', 'dm-uuid-LVM-vz2cv2RninoOpnjrAP98IcdUAgz3XBEESK6kemILvNkP1xNIipyazKS9tR60DcmG'], 'uuids': ['b92d0132-23f4-42dc-a584-a78bf3becacb'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3eb80431', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['SK6kem-ILvN-kP1x-NIip-yazK-S9tR-60DcmG']}})  2026-02-04 04:43:32.399687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b', 'scsi-SQEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5de00e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-04 04:43:32.399701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Zb3vde-Jb13-PnWs-XBLv-pqCq-xraX-sEUQHY', 'scsi-0QEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675', 'scsi-SQEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aa7bd7a5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af']}})  2026-02-04 04:43:32.399714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.399726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.399738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-00-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-04 04:43:32.399792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.399811 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VvgnpR-veGH-UC1V-Wvw0-UeAn-dBY1-g45KfH', 'dm-uuid-CRYPT-LUKS2-4b38dba5f6644e8da6669b50aa3859a3-VvgnpR-veGH-UC1V-Wvw0-UeAn-dBY1-g45KfH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-04 04:43:32.399823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.399836 | orchestrator | skipping: [testbed-node-3] 2026-02-04 
04:43:32.399902 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af', 'dm-uuid-LVM-jfhjIQs9I12AbVZ4uHpbas8Q8DuoJ56eVvgnpRveGHUC1VWvw0UeAndBY1g45KfH'], 'uuids': ['4b38dba5-f664-4e8d-a666-9b50aa3859a3'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'aa7bd7a5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VvgnpR-veGH-UC1V-Wvw0-UeAn-dBY1-g45KfH']}})  2026-02-04 04:43:32.399915 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:43:32.399927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2LO7pB-3JRT-gNDG-CXHX-CXgP-r5lI-kGILdq', 'scsi-0QEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52', 'scsi-SQEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3eb80431', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639']}})  2026-02-04 04:43:32.399939 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:32.399972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cdb44653', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-04 04:43:33.817770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:33.817928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:33.817948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-SK6kem-ILvN-kP1x-NIip-yazK-S9tR-60DcmG', 'dm-uuid-CRYPT-LUKS2-b92d013223f442dca584a78bf3becacb-SK6kem-ILvN-kP1x-NIip-yazK-S9tR-60DcmG'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-04 04:43:33.817965 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:43:33.817978 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:33.817990 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:33.818082 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:33.818097 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-04 04:43:33.818123 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:33.818155 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:33.818167 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:33.818181 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5'], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0e69a1b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part16', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part14', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part15', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part1', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-04 04:43:33.818211 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:33.818223 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:43:33.818235 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:43:33.818246 | orchestrator | 2026-02-04 04:43:33.818263 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-04 04:43:33.818275 | orchestrator | Wednesday 04 February 2026 04:43:33 +0000 (0:00:02.440) 0:01:56.877 **** 2026-02-04 04:43:33.818297 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:33.953305 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:33.953377 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:33.953384 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:33.953407 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:33.953412 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:33.953427 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:33.953446 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5c0a15c2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:33.953457 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:33.953462 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:33.953469 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:33.953477 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.180947 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.181043 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.181053 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.181060 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.181077 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.181102 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '50d185a4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part16', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part14', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part15', 
'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part1', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.181116 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.181122 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.181129 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:43:34.181141 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.181148 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.181159 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.322427 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.322545 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.322564 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.322594 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.322630 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '853c0bfc', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part16', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part14', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part15', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part1', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.322666 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.322678 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.322690 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:43:34.322709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.322722 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e', 'dm-uuid-LVM-BggcAryejjvGBF4uvp6BcYG8cW5k2lInqXUvcrL0euXIKDnaXO5lD17ef9ulmfzT'], 'uuids': ['f158fdb8-bb9c-48fc-8ca9-031d13c41132'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '859f82ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['qXUvcr-L0eu-XIKD-naXO-5lD1-7ef9-ulmfzT']}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.322753 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811', 'scsi-SQEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10db325f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.427934 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-LUqg5q-XQXl-4J84-Fu4r-xNUp-Z07d-jQvh8Z', 'scsi-0QEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388', 'scsi-SQEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9e979b3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f']}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.428053 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.428096 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.428111 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:43:34.428125 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-19-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.428157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.428200 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8YdRgn-42SX-PKkS-Smdq-nloX-2coy-a2uTEh', 'dm-uuid-CRYPT-LUKS2-2302d1af8aee4d9d86e1dfe7dfc67d39-8YdRgn-42SX-PKkS-Smdq-nloX-2coy-a2uTEh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.428222 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.428242 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f', 'dm-uuid-LVM-8XaWcwBldrFACyhn8O8pDrkh8WYfwfMh8YdRgn42SXPKkSSmdqnloX2coya2uTEh'], 'uuids': ['2302d1af-8aee-4d9d-86e1-dfe7dfc67d39'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9e979b3a', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8YdRgn-42SX-PKkS-Smdq-nloX-2coy-a2uTEh']}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.428272 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PkP1x1-WFQe-TRGf-2R1c-oEQv-Qw43-IKwaXF', 'scsi-0QEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40', 'scsi-SQEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '859f82ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e']}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.428293 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.428336 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e5ab81eb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14', 
'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.680628 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.680735 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.680772 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-02-04 04:43:34.680786 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-qXUvcr-L0eu-XIKD-naXO-5lD1-7ef9-ulmfzT', 'dm-uuid-CRYPT-LUKS2-f158fdb8bb9c48fc8ca9031d13c41132-qXUvcr-L0eu-XIKD-naXO-5lD1-7ef9-ulmfzT'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.680800 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843', 'dm-uuid-LVM-GuQppvMqMgPM92HHdmch1RUlEtgMK7bAQGkZWEBmxgWBBqnmby4j6kn1XrU8W6rj'], 'uuids': ['18125888-7064-431c-840e-0a8e7e279804'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '87322fe2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QGkZWE-Bmxg-WBBq-nmby-4j6k-n1Xr-U8W6rj']}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.680832 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23', 'scsi-SQEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0d2f838', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.680902 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lVamx9-eYv9-88F9-1eWN-Mo2X-ZvoC-DQM8Qk', 'scsi-0QEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536', 'scsi-SQEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d2cd144', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c']}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.680927 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.680939 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.680950 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.680962 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.680974 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:43:34.681002 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5ctI13-zu0z-0FRb-KQOR-FQtA-0W3p-u2nuf0', 'dm-uuid-CRYPT-LUKS2-81c8120205304967adb7cc6e42b3aaa8-5ctI13-zu0z-0FRb-KQOR-FQtA-0W3p-u2nuf0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.776893 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 
'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.777028 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c', 'dm-uuid-LVM-jabOFLmF8RS1U4YRftNuTtdThdIFxea35ctI13zu0z0FRbKQORFQtA0W3pu2nuf0'], 'uuids': ['81c81202-0530-4967-adb7-cc6e42b3aaa8'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6d2cd144', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5ctI13-zu0z-0FRb-KQOR-FQtA-0W3p-u2nuf0']}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.777055 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1Bwhrb-Xrjl-JUvU-1GoK-f7aN-SV93-uYzfRx', 'scsi-0QEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd', 'scsi-SQEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '87322fe2', 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843']}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.777081 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.777162 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd5a1c69a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.777294 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.777326 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.777345 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  
2026-02-04 04:43:34.777371 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639', 'dm-uuid-LVM-vz2cv2RninoOpnjrAP98IcdUAgz3XBEESK6kemILvNkP1xNIipyazKS9tR60DcmG'], 'uuids': ['b92d0132-23f4-42dc-a584-a78bf3becacb'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3eb80431', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['SK6kem-ILvN-kP1x-NIip-yazK-S9tR-60DcmG']}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.777408 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b', 'scsi-SQEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5de00e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.865522 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Zb3vde-Jb13-PnWs-XBLv-pqCq-xraX-sEUQHY', 'scsi-0QEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675', 'scsi-SQEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aa7bd7a5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af']}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.865624 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QGkZWE-Bmxg-WBBq-nmby-4j6k-n1Xr-U8W6rj', 'dm-uuid-CRYPT-LUKS2-181258887064431c840e0a8e7e279804-QGkZWE-Bmxg-WBBq-nmby-4j6k-n1Xr-U8W6rj'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.865640 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.865654 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.865681 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:43:34.865732 | 
orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:34.865745 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VvgnpR-veGH-UC1V-Wvw0-UeAn-dBY1-g45KfH', 'dm-uuid-CRYPT-LUKS2-4b38dba5f6644e8da6669b50aa3859a3-VvgnpR-veGH-UC1V-Wvw0-UeAn-dBY1-g45KfH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:34.865757 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:34.865769 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af', 'dm-uuid-LVM-jfhjIQs9I12AbVZ4uHpbas8Q8DuoJ56eVvgnpRveGHUC1VWvw0UeAndBY1g45KfH'], 'uuids': ['4b38dba5-f664-4e8d-a666-9b50aa3859a3'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'aa7bd7a5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VvgnpR-veGH-UC1V-Wvw0-UeAn-dBY1-g45KfH']}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:34.865783 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:43:34.865801 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2LO7pB-3JRT-gNDG-CXHX-CXgP-r5lI-kGILdq', 'scsi-0QEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52', 'scsi-SQEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3eb80431', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639']}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:34.865830 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:36.108963 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cdb44653', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:36.109060 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:36.109113 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:36.109145 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:36.109157 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:36.109167 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:36.109178 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:36.109189 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-SK6kem-ILvN-kP1x-NIip-yazK-S9tR-60DcmG', 'dm-uuid-CRYPT-LUKS2-b92d013223f442dca584a78bf3becacb-SK6kem-ILvN-kP1x-NIip-yazK-S9tR-60DcmG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:36.109212 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:36.109223 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:36.109239 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:43:53.448760 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:53.448954 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0e69a1b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part16', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part14', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part15', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part1', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:53.449022 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:53.449054 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-04 04:43:53.449068 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:43:53.449081 | orchestrator |
2026-02-04 04:43:53.449093 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-04 04:43:53.449105 | orchestrator | Wednesday 04 February 2026 04:43:36 +0000 (0:00:02.418) 0:01:59.295 ****
2026-02-04 04:43:53.449116 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:43:53.449128 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:43:53.449138 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:43:53.449149 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:43:53.449160 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:43:53.449171 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:43:53.449182 | orchestrator | ok: [testbed-manager]
2026-02-04 04:43:53.449193 | orchestrator |
2026-02-04 04:43:53.449204 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-04 04:43:53.449215 | orchestrator | Wednesday 04 February 2026 04:43:38 +0000 (0:00:02.604) 0:02:01.899 ****
2026-02-04 04:43:53.449226 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:43:53.449237 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:43:53.449247 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:43:53.449258 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:43:53.449269 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:43:53.449281 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:43:53.449294 | orchestrator | ok: [testbed-manager]
2026-02-04 04:43:53.449307 | orchestrator |
2026-02-04 04:43:53.449319 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-04 04:43:53.449332 | orchestrator | Wednesday 04 February 2026 04:43:40 +0000 (0:00:02.003) 0:02:03.903 ****
2026-02-04 04:43:53.449344 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:43:53.449356 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:43:53.449368 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:43:53.449381 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:43:53.449394 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:43:53.449407 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:43:53.449419 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:43:53.449431 | orchestrator |
2026-02-04 04:43:53.449444 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-04 04:43:53.449456 | orchestrator | Wednesday 04 February 2026 04:43:43 +0000 (0:00:02.591) 0:02:06.495 ****
2026-02-04 04:43:53.449478 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:53.449490 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:43:53.449503 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:43:53.449515 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:43:53.449528 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:43:53.449540 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:43:53.449553 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:43:53.449565 | orchestrator |
2026-02-04 04:43:53.449577 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-04 04:43:53.449590 | orchestrator | Wednesday 04 February 2026 04:43:45 +0000 (0:00:02.032) 0:02:08.528 ****
2026-02-04 04:43:53.449602 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:53.449616 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:43:53.449628 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:43:53.449640 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:43:53.449650 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:43:53.449661 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:43:53.449672 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-02-04 04:43:53.449682 | orchestrator |
2026-02-04 04:43:53.449693 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-04 04:43:53.449704 | orchestrator | Wednesday 04 February 2026 04:43:48 +0000 (0:00:02.871) 0:02:11.399 ****
2026-02-04 04:43:53.449715 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:43:53.449725 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:43:53.449736 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:43:53.449747 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:43:53.449757 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:43:53.449768 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:43:53.449779 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:43:53.449789 | orchestrator |
2026-02-04 04:43:53.449800 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-04 04:43:53.449811 | orchestrator | Wednesday 04 February 2026 04:43:50 +0000 (0:00:01.886) 0:02:13.286 ****
2026-02-04 04:43:53.449822 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 04:43:53.449833 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-04 04:43:53.449844 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 04:43:53.449883 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-04 04:43:53.449902 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-04 04:43:53.449913 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 04:43:53.449923 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-04 04:43:53.449934 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-04 04:43:53.449944 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-04 04:43:53.449955 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-04 04:43:53.449965 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-04 04:43:53.449976 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-04 04:43:53.449986 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-04 04:43:53.449997 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-04 04:43:53.450007 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-04 04:43:53.450080 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-04 04:43:53.450094 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-04 04:43:53.450105 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-04 04:43:53.450116 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-04 04:43:53.450126 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-04 04:43:53.450137 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-04 04:43:53.450147 | orchestrator |
2026-02-04 04:43:53.450158 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-04 04:43:53.450186 | orchestrator | Wednesday 04 February 2026 04:43:53 +0000 (0:00:03.322) 0:02:16.608 ****
2026-02-04 04:44:36.617160 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 04:44:36.617278 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 04:44:36.617294 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 04:44:36.617305 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:44:36.617318 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-04 04:44:36.617329 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-04 04:44:36.617340 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-04 04:44:36.617351 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:44:36.617362 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-04 04:44:36.617373 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-04 04:44:36.617384 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-04 04:44:36.617395 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:44:36.617406 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-04 04:44:36.617416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-04 04:44:36.617427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-04 04:44:36.617438 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:44:36.617449 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-04 04:44:36.617460 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-04 04:44:36.617471 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-04 04:44:36.617481 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:44:36.617492 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-04 04:44:36.617503 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-04 04:44:36.617514 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-04 04:44:36.617525 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:44:36.617536 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-04 04:44:36.617547 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-04 04:44:36.617558 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-04 04:44:36.617569 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:44:36.617580 | orchestrator |
2026-02-04 04:44:36.617592 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-04 04:44:36.617604 | orchestrator | Wednesday 04 February 2026 04:43:55 +0000 (0:00:02.333) 0:02:18.941 ****
2026-02-04 04:44:36.617615 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:44:36.617626 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:44:36.617637 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:44:36.617648 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:44:36.617659 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 04:44:36.617670 | orchestrator |
2026-02-04 04:44:36.617681 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-04 04:44:36.617694 | orchestrator | Wednesday 04 February 2026 04:43:57 +0000 (0:00:02.069) 0:02:21.011 ****
2026-02-04 04:44:36.617707 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:44:36.617719 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:44:36.617733 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:44:36.617745 | orchestrator |
2026-02-04 04:44:36.617758 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-04 04:44:36.617770 | orchestrator | Wednesday 04 February 2026 04:43:59 +0000 (0:00:01.560) 0:02:22.572 ****
2026-02-04 04:44:36.617784 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:44:36.617824 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:44:36.617836 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:44:36.617846 | orchestrator |
2026-02-04 04:44:36.617857 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-04 04:44:36.617868 | orchestrator | Wednesday 04 February 2026 04:44:00 +0000 (0:00:01.359) 0:02:23.932 ****
2026-02-04 04:44:36.617947 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:44:36.617961 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:44:36.617972 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:44:36.617983 | orchestrator |
2026-02-04 04:44:36.618008 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-04 04:44:36.618082 | orchestrator | Wednesday 04 February 2026 04:44:02 +0000 (0:00:01.353) 0:02:25.286 ****
2026-02-04 04:44:36.618094 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:44:36.618105 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:44:36.618116 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:44:36.618127 | orchestrator |
2026-02-04 04:44:36.618138 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-04 04:44:36.618149 | orchestrator | Wednesday 04 February 2026 04:44:03 +0000 (0:00:01.529) 0:02:26.815 ****
2026-02-04 04:44:36.618160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 04:44:36.618171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 04:44:36.618182 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 04:44:36.618202 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:44:36.618213 | orchestrator |
2026-02-04 04:44:36.618224 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-04 04:44:36.618235 | orchestrator | Wednesday 04 February 2026 04:44:05 +0000 (0:00:01.750) 0:02:28.566 ****
2026-02-04 04:44:36.618246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 04:44:36.618256 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 04:44:36.618267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 04:44:36.618278 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:44:36.618289 | orchestrator |
2026-02-04 04:44:36.618300 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-04 04:44:36.618328 | orchestrator | Wednesday 04 February 2026 04:44:07 +0000 (0:00:01.714) 0:02:30.280 ****
2026-02-04 04:44:36.618340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 04:44:36.618351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 04:44:36.618362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 04:44:36.618373 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:44:36.618383 | orchestrator |
2026-02-04 04:44:36.618394 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-04 04:44:36.618405 | orchestrator | Wednesday 04 February 2026 04:44:08 +0000 (0:00:01.610) 0:02:31.890 ****
2026-02-04 04:44:36.618416 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:44:36.618427 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:44:36.618438 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:44:36.618448 | orchestrator |
2026-02-04 04:44:36.618459 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-04 04:44:36.618471 | orchestrator | Wednesday 04 February 2026 04:44:10 +0000 (0:00:01.462) 0:02:33.352 ****
2026-02-04 04:44:36.618481 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-04 04:44:36.618492 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-04 04:44:36.618503 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-04 04:44:36.618514 | orchestrator |
2026-02-04 04:44:36.618525 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-04 04:44:36.618535 | orchestrator | Wednesday 04 February 2026 04:44:11 +0000 (0:00:01.575) 0:02:34.928 ****
2026-02-04 04:44:36.618546 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 04:44:36.618557 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 04:44:36.618580 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 04:44:36.618591 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-04 04:44:36.618602 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-04 04:44:36.618612 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-04 04:44:36.618623 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-04 04:44:36.618634 | orchestrator |
2026-02-04 04:44:36.618645 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-04 04:44:36.618656 | orchestrator | Wednesday 04 February 2026 04:44:13 +0000 (0:00:02.067) 0:02:36.996 ****
2026-02-04 04:44:36.618666 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 04:44:36.618677 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 04:44:36.618688 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 04:44:36.618699 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-04 04:44:36.618710 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-04 04:44:36.618720 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-04 04:44:36.618731 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-04 04:44:36.618742 | orchestrator |
2026-02-04 04:44:36.618753 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-02-04 04:44:36.618764 | orchestrator | Wednesday 04 February 2026 04:44:17 +0000 (0:00:03.214) 0:02:40.210 ****
2026-02-04 04:44:36.618774 | orchestrator | changed: [testbed-node-4]
2026-02-04 04:44:36.618785 | orchestrator | changed: [testbed-node-5]
2026-02-04 04:44:36.618796 | orchestrator | changed: [testbed-node-3]
2026-02-04 04:44:36.618806 | orchestrator | changed: [testbed-manager]
2026-02-04 04:44:36.618817 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:44:36.618828 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:44:36.618839 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:44:36.618849 | orchestrator |
2026-02-04 04:44:36.618860 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-02-04 04:44:36.618871 | orchestrator | Wednesday 04 February 2026 04:44:29 +0000 (0:00:12.057) 0:02:52.268 ****
2026-02-04 04:44:36.618903 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:44:36.618920 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:44:36.618931 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:44:36.618942 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:44:36.618953 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:44:36.618963 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:44:36.618974 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:44:36.618985 | orchestrator |
2026-02-04 04:44:36.618995 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-02-04 04:44:36.619006 | orchestrator | Wednesday 04 February 2026 04:44:31 +0000 (0:00:02.420) 0:02:54.688 ****
2026-02-04 04:44:36.619017 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:44:36.619028 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:44:36.619038 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:44:36.619049 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:44:36.619060 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:44:36.619070 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:44:36.619081 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:44:36.619092 | orchestrator |
2026-02-04 04:44:36.619102 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-02-04 04:44:36.619113 | orchestrator | Wednesday 04 February 2026 04:44:33 +0000 (0:00:02.038) 0:02:56.727 ****
2026-02-04 04:44:36.619132 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:44:36.619157 | orchestrator | changed: [testbed-node-1]
2026-02-04 04:44:36.619191 | orchestrator | changed: [testbed-node-0]
2026-02-04 04:44:36.619217 | orchestrator | changed: [testbed-node-2]
2026-02-04 04:44:36.619233 | orchestrator | changed: [testbed-node-3] 2026-02-04 04:44:36.619251 | orchestrator | changed: [testbed-node-4] 2026-02-04 04:44:36.619270 | orchestrator | changed: [testbed-node-5] 2026-02-04 04:44:36.619286 | orchestrator | 2026-02-04 04:44:36.619314 | orchestrator | TASK [ceph-validate : Include check_system.yml] ******************************** 2026-02-04 04:45:13.636413 | orchestrator | Wednesday 04 February 2026 04:44:36 +0000 (0:00:03.063) 0:02:59.791 **** 2026-02-04 04:45:13.636530 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-04 04:45:13.636547 | orchestrator | 2026-02-04 04:45:13.636560 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] *************** 2026-02-04 04:45:13.636590 | orchestrator | Wednesday 04 February 2026 04:44:39 +0000 (0:00:03.014) 0:03:02.805 **** 2026-02-04 04:45:13.636603 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.636625 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.636637 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.636648 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.636659 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.636670 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.636681 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.636692 | orchestrator | 2026-02-04 04:45:13.636704 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-02-04 04:45:13.636715 | orchestrator | Wednesday 04 February 2026 04:44:41 +0000 (0:00:01.970) 0:03:04.776 **** 2026-02-04 04:45:13.636726 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.636737 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.636749 | orchestrator | skipping: 
[testbed-node-2] 2026-02-04 04:45:13.636760 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.636771 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.636782 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.636793 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.636804 | orchestrator | 2026-02-04 04:45:13.636815 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-02-04 04:45:13.636826 | orchestrator | Wednesday 04 February 2026 04:44:43 +0000 (0:00:02.178) 0:03:06.955 **** 2026-02-04 04:45:13.636837 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.636848 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.636859 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.636869 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.636880 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.636891 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.636922 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.636934 | orchestrator | 2026-02-04 04:45:13.636945 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-02-04 04:45:13.636956 | orchestrator | Wednesday 04 February 2026 04:44:45 +0000 (0:00:02.214) 0:03:09.169 **** 2026-02-04 04:45:13.636967 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.636984 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.637002 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.637014 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.637025 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.637035 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.637046 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.637057 | orchestrator | 2026-02-04 04:45:13.637068 | orchestrator | TASK [ceph-validate : Fail on unsupported 
CentOS release] ********************** 2026-02-04 04:45:13.637079 | orchestrator | Wednesday 04 February 2026 04:44:48 +0000 (0:00:02.246) 0:03:11.416 **** 2026-02-04 04:45:13.637090 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.637125 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.637137 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.637147 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.637158 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.637169 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.637179 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.637190 | orchestrator | 2026-02-04 04:45:13.637201 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-02-04 04:45:13.637213 | orchestrator | Wednesday 04 February 2026 04:44:50 +0000 (0:00:01.879) 0:03:13.295 **** 2026-02-04 04:45:13.637224 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.637235 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.637245 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.637256 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.637267 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.637277 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.637288 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.637299 | orchestrator | 2026-02-04 04:45:13.637310 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-02-04 04:45:13.637336 | orchestrator | Wednesday 04 February 2026 04:44:52 +0000 (0:00:02.196) 0:03:15.491 **** 2026-02-04 04:45:13.637347 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.637358 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.637369 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.637379 | orchestrator | 
skipping: [testbed-node-3] 2026-02-04 04:45:13.637391 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.637402 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.637412 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.637423 | orchestrator | 2026-02-04 04:45:13.637434 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-02-04 04:45:13.637445 | orchestrator | Wednesday 04 February 2026 04:44:54 +0000 (0:00:02.017) 0:03:17.509 **** 2026-02-04 04:45:13.637455 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.637466 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.637476 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.637487 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.637498 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.637508 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.637519 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.637530 | orchestrator | 2026-02-04 04:45:13.637541 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-02-04 04:45:13.637552 | orchestrator | Wednesday 04 February 2026 04:44:56 +0000 (0:00:02.265) 0:03:19.775 **** 2026-02-04 04:45:13.637562 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.637573 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.637584 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.637594 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.637605 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.637633 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.637645 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.637656 | orchestrator | 2026-02-04 04:45:13.637666 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-02-04 
04:45:13.637677 | orchestrator | Wednesday 04 February 2026 04:44:58 +0000 (0:00:02.107) 0:03:21.882 **** 2026-02-04 04:45:13.637688 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.637698 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.637709 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.637720 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.637736 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.637751 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.637762 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.637773 | orchestrator | 2026-02-04 04:45:13.637792 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-02-04 04:45:13.637803 | orchestrator | Wednesday 04 February 2026 04:45:00 +0000 (0:00:01.982) 0:03:23.865 **** 2026-02-04 04:45:13.637814 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.637824 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.637835 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.637846 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.637857 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.637867 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.637878 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.637982 | orchestrator | 2026-02-04 04:45:13.637997 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-02-04 04:45:13.638008 | orchestrator | Wednesday 04 February 2026 04:45:02 +0000 (0:00:02.096) 0:03:25.962 **** 2026-02-04 04:45:13.638083 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.638095 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.638106 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.638117 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.638128 | orchestrator 
| skipping: [testbed-node-4] 2026-02-04 04:45:13.638139 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.638149 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.638160 | orchestrator | 2026-02-04 04:45:13.638171 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-02-04 04:45:13.638182 | orchestrator | Wednesday 04 February 2026 04:45:04 +0000 (0:00:02.220) 0:03:28.182 **** 2026-02-04 04:45:13.638193 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.638204 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.638214 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.638227 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 04:45:13.638239 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 04:45:13.638250 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.638261 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 04:45:13.638272 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 04:45:13.638283 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.638294 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 04:45:13.638305 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  
2026-02-04 04:45:13.638315 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.638326 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.638337 | orchestrator | 2026-02-04 04:45:13.638348 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-02-04 04:45:13.638359 | orchestrator | Wednesday 04 February 2026 04:45:07 +0000 (0:00:02.478) 0:03:30.660 **** 2026-02-04 04:45:13.638377 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.638388 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.638399 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.638409 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.638420 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.638431 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.638442 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.638452 | orchestrator | 2026-02-04 04:45:13.638463 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-02-04 04:45:13.638484 | orchestrator | Wednesday 04 February 2026 04:45:09 +0000 (0:00:01.962) 0:03:32.622 **** 2026-02-04 04:45:13.638495 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.638506 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.638516 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.638527 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.638538 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.638549 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.638560 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.638570 | orchestrator | 2026-02-04 04:45:13.638581 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-02-04 04:45:13.638592 | orchestrator | Wednesday 04 February 2026 04:45:11 +0000 (0:00:02.199) 0:03:34.822 **** 
2026-02-04 04:45:13.638603 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:13.638614 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:13.638625 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:13.638635 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:13.638646 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:13.638657 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:13.638668 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:13.638693 | orchestrator | 2026-02-04 04:45:13.638725 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-02-04 04:45:35.558779 | orchestrator | Wednesday 04 February 2026 04:45:13 +0000 (0:00:01.997) 0:03:36.819 **** 2026-02-04 04:45:35.558904 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:35.558981 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:35.559003 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:35.559022 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:35.559041 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:35.559061 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:35.559080 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:35.559101 | orchestrator | 2026-02-04 04:45:35.559121 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-02-04 04:45:35.559140 | orchestrator | Wednesday 04 February 2026 04:45:15 +0000 (0:00:02.370) 0:03:39.190 **** 2026-02-04 04:45:35.559152 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:35.559163 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:35.559174 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:35.559184 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:35.559196 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:35.559207 | orchestrator | skipping: [testbed-node-5] 
2026-02-04 04:45:35.559218 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:35.559229 | orchestrator | 2026-02-04 04:45:35.559239 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-02-04 04:45:35.559250 | orchestrator | Wednesday 04 February 2026 04:45:18 +0000 (0:00:02.100) 0:03:41.290 **** 2026-02-04 04:45:35.559261 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:35.559271 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:35.559286 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:35.559305 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:35.559332 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:35.559353 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:35.559371 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:35.559389 | orchestrator | 2026-02-04 04:45:35.559407 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-02-04 04:45:35.559426 | orchestrator | Wednesday 04 February 2026 04:45:20 +0000 (0:00:01.947) 0:03:43.238 **** 2026-02-04 04:45:35.559443 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:35.559459 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:35.559477 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:35.559495 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:35.559512 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 04:45:35.559563 | orchestrator | 2026-02-04 04:45:35.559582 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-02-04 04:45:35.559603 | orchestrator | Wednesday 04 February 2026 04:45:22 +0000 (0:00:02.892) 0:03:46.130 **** 2026-02-04 04:45:35.559622 | orchestrator | ok: [testbed-node-3] 2026-02-04 04:45:35.559642 | orchestrator | ok: 
[testbed-node-4] 2026-02-04 04:45:35.559661 | orchestrator | ok: [testbed-node-5] 2026-02-04 04:45:35.559679 | orchestrator | 2026-02-04 04:45:35.559699 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-02-04 04:45:35.559717 | orchestrator | Wednesday 04 February 2026 04:45:24 +0000 (0:00:01.418) 0:03:47.549 **** 2026-02-04 04:45:35.559737 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 04:45:35.559758 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 04:45:35.559777 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:35.559788 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 04:45:35.559799 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 04:45:35.559810 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:35.559838 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 04:45:35.559849 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 04:45:35.559860 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:35.559871 | orchestrator | 2026-02-04 04:45:35.559882 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-02-04 04:45:35.559892 | orchestrator | Wednesday 04 February 
2026 04:45:25 +0000 (0:00:01.509) 0:03:49.058 **** 2026-02-04 04:45:35.559905 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:35.559945 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:35.559957 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:35.559991 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:35.560003 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:35.560014 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:35.560025 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}, 
'ansible_loop_var': 'item'})  2026-02-04 04:45:35.560048 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:35.560059 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:35.560070 | orchestrator | 2026-02-04 04:45:35.560081 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-02-04 04:45:35.560092 | orchestrator | Wednesday 04 February 2026 04:45:27 +0000 (0:00:01.759) 0:03:50.818 **** 2026-02-04 04:45:35.560103 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:35.560113 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:35.560124 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:35.560135 | orchestrator | 2026-02-04 04:45:35.560146 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-02-04 04:45:35.560157 | orchestrator | Wednesday 04 February 2026 04:45:29 +0000 (0:00:01.407) 0:03:52.226 **** 2026-02-04 04:45:35.560168 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:35.560179 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:35.560189 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:35.560200 | orchestrator | 2026-02-04 04:45:35.560211 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-02-04 04:45:35.560222 | orchestrator | Wednesday 04 February 2026 04:45:30 +0000 (0:00:01.477) 0:03:53.703 **** 2026-02-04 04:45:35.560233 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:35.560243 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:35.560254 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:35.560265 | 
orchestrator | 2026-02-04 04:45:35.560276 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-02-04 04:45:35.560287 | orchestrator | Wednesday 04 February 2026 04:45:31 +0000 (0:00:01.369) 0:03:55.073 **** 2026-02-04 04:45:35.560298 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:35.560309 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:35.560319 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:35.560330 | orchestrator | 2026-02-04 04:45:35.560341 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-02-04 04:45:35.560352 | orchestrator | Wednesday 04 February 2026 04:45:33 +0000 (0:00:01.340) 0:03:56.414 **** 2026-02-04 04:45:35.560363 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'}) 2026-02-04 04:45:35.560380 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}) 2026-02-04 04:45:35.560392 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}) 2026-02-04 04:45:35.560402 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'}) 2026-02-04 04:45:35.560414 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'}) 2026-02-04 04:45:35.560424 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'}) 2026-02-04 04:45:35.560436 | orchestrator | 2026-02-04 04:45:35.560447 | orchestrator | TASK 
[ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-02-04 04:45:35.560459 | orchestrator | Wednesday 04 February 2026 04:45:35 +0000 (0:00:02.155) 0:03:58.569 **** 2026-02-04 04:45:35.560491 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-33635451-34dd-546b-bd98-6f515d7d790f/osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1770172649.3678493, 'mtime': 1770172649.3638492, 'ctime': 1770172649.3638492, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-33635451-34dd-546b-bd98-6f515d7d790f/osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:38.512513 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e/osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 
1770172669.5041528, 'mtime': 1770172669.4991527, 'ctime': 1770172669.4991527, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e/osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:38.512629 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:38.512666 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c/osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1770172647.3786397, 'mtime': 1770172647.3746395, 'ctime': 1770172647.3746395, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c/osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:38.512704 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-8a64378d-205e-5817-b815-b641dc764843/osd-block-8a64378d-205e-5817-b815-b641dc764843', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1770172667.408937, 'mtime': 1770172667.4049368, 'ctime': 1770172667.4049368, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-8a64378d-205e-5817-b815-b641dc764843/osd-block-8a64378d-205e-5817-b815-b641dc764843', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:38.512717 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:38.512748 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af/osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1770172646.8478582, 'mtime': 1770172646.841858, 'ctime': 1770172646.841858, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af/osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:38.512767 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-43734a2f-bb9f-5443-b704-3f4971f68639/osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1770172664.7721288, 'mtime': 1770172664.7651286, 'ctime': 1770172664.7651286, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': 
False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-43734a2f-bb9f-5443-b704-3f4971f68639/osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:38.512789 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:38.512800 | orchestrator | 2026-02-04 04:45:38.512813 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-02-04 04:45:38.512825 | orchestrator | Wednesday 04 February 2026 04:45:36 +0000 (0:00:01.552) 0:04:00.121 **** 2026-02-04 04:45:38.512837 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 04:45:38.512850 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 04:45:38.512861 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:38.512873 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 04:45:38.512884 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 04:45:38.512895 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:38.512906 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 
'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 04:45:38.512967 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 04:45:38.512978 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:38.512990 | orchestrator | 2026-02-04 04:45:38.513001 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-02-04 04:45:38.513014 | orchestrator | Wednesday 04 February 2026 04:45:38 +0000 (0:00:01.461) 0:04:01.583 **** 2026-02-04 04:45:38.513035 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:49.158334 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:49.158449 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:49.158465 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:49.158478 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': 
{'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:49.158489 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:49.158500 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:49.158527 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:49.158561 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:49.158573 | orchestrator | 2026-02-04 04:45:49.158585 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-02-04 04:45:49.158597 | orchestrator | Wednesday 04 February 2026 04:45:39 +0000 (0:00:01.417) 0:04:03.001 **** 2026-02-04 04:45:49.158609 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 04:45:49.158620 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 04:45:49.158631 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:49.158641 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 04:45:49.158652 | orchestrator | 
skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 04:45:49.158661 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:49.158671 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 04:45:49.158680 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 04:45:49.158689 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:49.158699 | orchestrator | 2026-02-04 04:45:49.158710 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-02-04 04:45:49.158722 | orchestrator | Wednesday 04 February 2026 04:45:41 +0000 (0:00:01.713) 0:04:04.715 **** 2026-02-04 04:45:49.158733 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:49.158745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:49.158756 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:49.158783 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 
'item': {'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:49.158794 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:49.158805 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:49.158815 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:49.158826 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'}, 'ansible_loop_var': 'item'})  2026-02-04 04:45:49.158844 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:49.158855 | orchestrator | 2026-02-04 04:45:49.158865 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-02-04 04:45:49.158876 | orchestrator | Wednesday 04 February 2026 04:45:43 +0000 (0:00:01.523) 0:04:06.238 **** 2026-02-04 04:45:49.158887 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:49.158897 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:49.158907 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:49.158946 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:49.158959 | orchestrator | skipping: 
[testbed-node-4] 2026-02-04 04:45:49.158968 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:45:49.158976 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:49.158982 | orchestrator | 2026-02-04 04:45:49.158996 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-02-04 04:45:49.159003 | orchestrator | Wednesday 04 February 2026 04:45:44 +0000 (0:00:01.938) 0:04:08.177 **** 2026-02-04 04:45:49.159011 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:45:49.159018 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:45:49.159025 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:45:49.159032 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:45:49.159040 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 04:45:49.159048 | orchestrator | 2026-02-04 04:45:49.159055 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-02-04 04:45:49.159062 | orchestrator | Wednesday 04 February 2026 04:45:47 +0000 (0:00:02.663) 0:04:10.841 **** 2026-02-04 04:45:49.159069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:45:49.159077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:45:49.159085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:45:49.159093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:45:49.159100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-04 04:45:49.159108 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:45:49.159115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:45:49.159122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:45:49.159129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:45:49.159137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:45:49.159144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:45:49.159151 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:45:49.159158 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:45:49.159166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:45:49.159181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:45:49.159193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064204 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:46:07.064222 | orchestrator 
| 2026-02-04 04:46:07.064235 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-04 04:46:07.064248 | orchestrator | Wednesday 04 February 2026 04:45:49 +0000 (0:00:01.491) 0:04:12.332 **** 2026-02-04 04:46:07.064259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064315 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:46:07.064326 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-04 04:46:07.064399 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:46:07.064410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064443 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064454 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064465 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:46:07.064476 | orchestrator | 2026-02-04 04:46:07.064488 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-04 04:46:07.064499 | orchestrator | Wednesday 04 February 2026 04:45:50 +0000 (0:00:01.780) 0:04:14.113 **** 2026-02-04 04:46:07.064510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064569 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-02-04 04:46:07.064581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064595 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:46:07.064609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064693 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:46:07.064707 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 04:46:07.064762 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-04 04:46:07.064776 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:46:07.064789 | orchestrator | 2026-02-04 04:46:07.064803 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-04 04:46:07.064817 | orchestrator | Wednesday 04 February 2026 04:45:52 +0000 (0:00:01.474) 0:04:15.587 **** 2026-02-04 04:46:07.064831 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:46:07.064844 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:46:07.064858 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:46:07.064869 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:46:07.064880 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:46:07.064891 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:46:07.064901 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:46:07.064912 | orchestrator | 2026-02-04 04:46:07.064968 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-04 04:46:07.064982 | orchestrator | Wednesday 04 February 2026 04:45:54 +0000 (0:00:01.956) 0:04:17.543 **** 2026-02-04 04:46:07.064993 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:46:07.065004 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:46:07.065015 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:46:07.065026 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:46:07.065036 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:46:07.065053 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:46:07.065065 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:46:07.065075 | orchestrator | 2026-02-04 04:46:07.065086 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-04 04:46:07.065097 | orchestrator | Wednesday 04 February 2026 04:45:56 +0000 (0:00:02.260) 0:04:19.804 **** 2026-02-04 04:46:07.065108 | orchestrator | skipping: 
[testbed-node-0] 2026-02-04 04:46:07.065128 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:46:07.065139 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:46:07.065150 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:46:07.065162 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:46:07.065173 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:46:07.065183 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:46:07.065194 | orchestrator | 2026-02-04 04:46:07.065205 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-02-04 04:46:07.065217 | orchestrator | Wednesday 04 February 2026 04:45:58 +0000 (0:00:02.069) 0:04:21.873 **** 2026-02-04 04:46:07.065228 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:46:07.065239 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:46:07.065249 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:46:07.065260 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:46:07.065271 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:46:07.065281 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:46:07.065292 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:46:07.065303 | orchestrator | 2026-02-04 04:46:07.065314 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-04 04:46:07.065325 | orchestrator | Wednesday 04 February 2026 04:46:00 +0000 (0:00:01.970) 0:04:23.844 **** 2026-02-04 04:46:07.065336 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:46:07.065347 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:46:07.065358 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:46:07.065368 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:46:07.065379 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:46:07.065390 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:46:07.065401 | 
orchestrator | skipping: [testbed-manager] 2026-02-04 04:46:07.065411 | orchestrator | 2026-02-04 04:46:07.065422 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-04 04:46:07.065433 | orchestrator | Wednesday 04 February 2026 04:46:02 +0000 (0:00:02.121) 0:04:25.965 **** 2026-02-04 04:46:07.065444 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:46:07.065454 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:46:07.065465 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:46:07.065476 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:46:07.065487 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:46:07.065498 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:46:07.065508 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:46:07.065519 | orchestrator | 2026-02-04 04:46:07.065530 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-04 04:46:07.065540 | orchestrator | Wednesday 04 February 2026 04:46:04 +0000 (0:00:01.902) 0:04:27.867 **** 2026-02-04 04:46:07.065551 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:46:07.065562 | orchestrator | skipping: [testbed-node-1] 2026-02-04 04:46:07.065573 | orchestrator | skipping: [testbed-node-2] 2026-02-04 04:46:07.065583 | orchestrator | skipping: [testbed-node-3] 2026-02-04 04:46:07.065594 | orchestrator | skipping: [testbed-node-4] 2026-02-04 04:46:07.065605 | orchestrator | skipping: [testbed-node-5] 2026-02-04 04:46:07.065615 | orchestrator | skipping: [testbed-manager] 2026-02-04 04:46:07.065626 | orchestrator | 2026-02-04 04:46:07.065637 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-04 04:46:07.065647 | orchestrator | Wednesday 04 February 2026 04:46:06 +0000 (0:00:02.236) 0:04:30.104 **** 2026-02-04 04:46:07.065666 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:09.976755 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:09.976897 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:09.977013 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:09.977030 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:09.977044 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:09.977057 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:46:09.977069 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:09.977080 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:09.977108 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:09.977119 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:09.977130 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:09.977141 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:09.977152 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:46:09.977163 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:09.977174 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:09.977185 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:09.977196 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:09.977207 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:09.977218 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:09.977228 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:46:09.977239 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:09.977250 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:09.977263 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:09.977284 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:09.977317 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:09.977330 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:09.977344 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:09.977357 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:09.977371 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:09.977383 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:09.977396 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:09.977409 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:46:09.977422 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:09.977440 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:09.977454 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:09.977467 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:09.977481 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:09.977494 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:09.977507 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:09.977523 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:09.977542 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:09.977571 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:46:09.977594 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:09.977625 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:46:09.977643 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:09.977662 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:09.977680 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:09.977701 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:46:09.977721 | orchestrator |
2026-02-04 04:46:09.977742 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-02-04 04:46:09.977761 | orchestrator | Wednesday 04 February 2026 04:46:09 +0000 (0:00:02.213) 0:04:32.318 ****
2026-02-04 04:46:09.977783 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:46:09.977804 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:46:09.977823 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:46:09.977850 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:46:14.048193 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:46:14.048299 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:46:14.048315 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:46:14.048327 | orchestrator |
2026-02-04 04:46:14.048340 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-02-04 04:46:14.048353 | orchestrator | Wednesday 04 February 2026 04:46:11 +0000 (0:00:02.232) 0:04:34.551 ****
2026-02-04 04:46:14.048364 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:14.048377 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:14.048390 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:14.048403 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:14.048414 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:14.048428 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:14.048439 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:46:14.048467 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:14.048479 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:14.048490 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:14.048501 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:14.048512 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:14.048542 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:14.048554 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:46:14.048565 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:14.048575 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:14.048586 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:14.048597 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:14.048608 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:14.048619 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:14.048630 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:46:14.048641 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:14.048652 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:14.048679 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:14.048691 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:14.048702 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:14.048713 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:14.048724 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:14.048736 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:14.048750 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:14.048763 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:14.048782 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:14.048803 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:14.048817 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:46:14.048830 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:46:14.048843 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:14.048857 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 04:46:14.048870 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:14.048883 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:14.048896 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:14.048909 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:14.048922 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:14.049014 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:46:14.049028 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 04:46:14.049039 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 04:46:14.049050 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 04:46:14.049061 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 04:46:14.049081 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 04:46:56.369599 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:46:56.369719 | orchestrator |
2026-02-04 04:46:56.369737 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-02-04 04:46:56.369751 | orchestrator | Wednesday 04 February 2026 04:46:14 +0000 (0:00:02.672) 0:04:37.224 ****
2026-02-04 04:46:56.369762 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:46:56.369779 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:46:56.369799 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:46:56.369819 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:46:56.369837 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:46:56.369856 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:46:56.369874 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:46:56.369893 | orchestrator |
2026-02-04 04:46:56.369913 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-02-04 04:46:56.369989 | orchestrator | Wednesday 04 February 2026 04:46:16 +0000 (0:00:02.295) 0:04:39.520 ****
2026-02-04 04:46:56.370010 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:46:56.370145 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:46:56.370169 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:46:56.370190 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:46:56.370210 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:46:56.370231 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:46:56.370252 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:46:56.370273 | orchestrator |
2026-02-04 04:46:56.370295 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-02-04 04:46:56.370315 | orchestrator | Wednesday 04 February 2026 04:46:18 +0000 (0:00:02.224) 0:04:41.744 ****
2026-02-04 04:46:56.370336 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:46:56.370358 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:46:56.370378 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:46:56.370399 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:46:56.370419 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:46:56.370439 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:46:56.370460 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:46:56.370479 | orchestrator |
2026-02-04 04:46:56.370595 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-04 04:46:56.370672 | orchestrator | Wednesday 04 February 2026 04:46:20 +0000 (0:00:02.252) 0:04:43.997 ****
2026-02-04 04:46:56.370694 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-04 04:46:56.370717 | orchestrator |
2026-02-04 04:46:56.370737 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-02-04 04:46:56.370754 | orchestrator | Wednesday 04 February 2026 04:46:23 +0000 (0:00:02.675) 0:04:46.673 ****
2026-02-04 04:46:56.370774 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 04:46:56.370795 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 04:46:56.370815 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 04:46:56.370835 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 04:46:56.370854 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 04:46:56.370874 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 04:46:56.370894 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 04:46:56.370914 | orchestrator |
2026-02-04 04:46:56.371000 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-02-04 04:46:56.371023 | orchestrator | Wednesday 04 February 2026 04:46:25 +0000 (0:00:02.438) 0:04:49.111 ****
2026-02-04 04:46:56.371042 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:46:56.371059 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:46:56.371077 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:46:56.371095 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:46:56.371113 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:46:56.371130 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:46:56.371145 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:46:56.371162 | orchestrator |
2026-02-04 04:46:56.371179 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-02-04 04:46:56.371196 | orchestrator | Wednesday 04 February 2026 04:46:28 +0000 (0:00:02.629) 0:04:51.741 ****
2026-02-04 04:46:56.371212 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:46:56.371229 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:46:56.371245 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:46:56.371261 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:46:56.371278 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:46:56.371294 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:46:56.371328 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:46:56.371367 | orchestrator |
2026-02-04 04:46:56.371398 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-02-04 04:46:56.371415 | orchestrator | Wednesday 04 February 2026 04:46:30 +0000 (0:00:02.319) 0:04:54.060 ****
2026-02-04 04:46:56.371431 | orchestrator | ok: [testbed-node-1]
2026-02-04 04:46:56.371449 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:46:56.371466 | orchestrator | ok: [testbed-node-2]
2026-02-04 04:46:56.371483 | orchestrator | ok: [testbed-node-3]
2026-02-04 04:46:56.371500 | orchestrator | ok: [testbed-node-4]
2026-02-04 04:46:56.371518 | orchestrator | ok: [testbed-node-5]
2026-02-04 04:46:56.371536 | orchestrator | ok: [testbed-manager]
2026-02-04 04:46:56.371554 | orchestrator |
2026-02-04 04:46:56.371572 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-02-04 04:46:56.371590 | orchestrator | Wednesday 04 February 2026 04:46:33 +0000 (0:00:02.449) 0:04:56.510 ****
2026-02-04 04:46:56.371609 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:46:56.371627 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:46:56.371647 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:46:56.371666 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:46:56.371712 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:46:56.371730 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:46:56.371749 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:46:56.371768 | orchestrator |
2026-02-04 04:46:56.371786 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-04 04:46:56.371806 | orchestrator | Wednesday 04 February 2026 04:46:35 +0000 (0:00:02.315) 0:04:58.826 ****
2026-02-04 04:46:56.371824 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:46:56.371841 | orchestrator | skipping: [testbed-node-1]
2026-02-04 04:46:56.371852 | orchestrator | skipping: [testbed-node-2]
2026-02-04 04:46:56.371862 | orchestrator | skipping: [testbed-node-3]
2026-02-04 04:46:56.371873 | orchestrator | skipping: [testbed-node-4]
2026-02-04 04:46:56.371884 | orchestrator | skipping: [testbed-node-5]
2026-02-04 04:46:56.371894 | orchestrator | skipping: [testbed-manager]
2026-02-04 04:46:56.371905 | orchestrator |
2026-02-04 04:46:56.371916 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-02-04 04:46:56.371927 | orchestrator | Wednesday 04 February 2026 04:46:38 +0000 (0:00:02.511) 0:05:01.337 ****
2026-02-04 04:46:56.371975 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:46:56.371994 | orchestrator |
2026-02-04 04:46:56.372013 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-02-04 04:46:56.372031 | orchestrator | Wednesday 04 February 2026 04:46:40 +0000 (0:00:02.746) 0:05:04.083 ****
2026-02-04 04:46:56.372065 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:46:56.372076 | orchestrator |
2026-02-04 04:46:56.372087 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-02-04 04:46:56.372098 | orchestrator |
2026-02-04 04:46:56.372109 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-04 04:46:56.372120 | orchestrator | Wednesday 04 February 2026 04:46:42 +0000 (0:00:02.060) 0:05:06.144 ****
2026-02-04 04:46:56.372130 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:46:56.372141 | orchestrator |
2026-02-04 04:46:56.372152 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-04 04:46:56.372174 | orchestrator | Wednesday 04 February 2026 04:46:44 +0000 (0:00:01.431) 0:05:07.576 ****
2026-02-04 04:46:56.372185 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:46:56.372196 | orchestrator |
2026-02-04 04:46:56.372207 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-02-04 04:46:56.372218 | orchestrator | Wednesday 04 February 2026 04:46:45 +0000 (0:00:01.141) 0:05:08.717 ****
2026-02-04 04:46:56.372231 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-04 04:46:56.372257 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-04 04:46:56.372268 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-04 04:46:56.372280 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-04 04:46:56.372293 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-04 04:46:56.372305 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}])
2026-02-04 04:46:56.372318 | orchestrator |
2026-02-04 04:46:56.372329 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-04 04:46:56.372340 | orchestrator |
2026-02-04 04:46:56.372351 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-04 04:46:56.372373 | orchestrator | Wednesday 04 February 2026 04:46:56 +0000 (0:00:10.831) 0:05:19.548 ****
2026-02-04 04:47:24.211506 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:47:24.211623 | orchestrator |
2026-02-04 04:47:24.211639 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-04 04:47:24.211652 | orchestrator | Wednesday 04 February 2026 04:46:57 +0000 (0:00:01.525) 0:05:21.074 ****
2026-02-04 04:47:24.211664 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:47:24.211675 | orchestrator |
2026-02-04 04:47:24.211687 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-04 04:47:24.211698 | orchestrator | Wednesday 04 February 2026 04:46:59 +0000 (0:00:01.190) 0:05:22.265 ****
2026-02-04 04:47:24.211709 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:47:24.211720 | orchestrator |
2026-02-04 04:47:24.211731 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-04 04:47:24.211742 | orchestrator | Wednesday 04 February 2026 04:47:00 +0000 (0:00:01.095) 0:05:23.360 ****
2026-02-04 04:47:24.211753 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:47:24.211764 | orchestrator |
2026-02-04 04:47:24.211775 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-04 04:47:24.211786 | orchestrator | Wednesday 04 February 2026 04:47:01 +0000 (0:00:01.135) 0:05:24.496 ****
2026-02-04 04:47:24.211797 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-04 04:47:24.211808 | orchestrator |
2026-02-04 04:47:24.211819 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-04 04:47:24.211856 | orchestrator | Wednesday 04 February 2026 04:47:02 +0000 (0:00:01.116) 0:05:25.613 ****
2026-02-04 04:47:24.211868 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:47:24.211879 | orchestrator |
2026-02-04 04:47:24.211890 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-04 04:47:24.211901 | orchestrator | Wednesday 04 February 2026 04:47:03 +0000 (0:00:01.475) 0:05:27.089 ****
2026-02-04 04:47:24.211911 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:47:24.211922 | orchestrator |
2026-02-04 04:47:24.212004 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-04 04:47:24.212018 | orchestrator | Wednesday 04 February 2026 04:47:05 +0000 (0:00:01.129) 0:05:28.219 ****
2026-02-04 04:47:24.212030 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:47:24.212041 | orchestrator |
2026-02-04 04:47:24.212055 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-04 04:47:24.212068 | orchestrator | Wednesday 04 February 2026 04:47:06 +0000 (0:00:01.461) 0:05:29.681 ****
2026-02-04 04:47:24.212081 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:47:24.212093 | orchestrator |
2026-02-04 04:47:24.212106 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-04 04:47:24.212119 | orchestrator | Wednesday 04 February 2026 04:47:07 +0000 (0:00:01.159) 0:05:30.841 ****
2026-02-04 04:47:24.212132 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:47:24.212145 | orchestrator |
2026-02-04 04:47:24.212156 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-04 04:47:24.212167 | orchestrator | Wednesday 04 February 2026 04:47:08 +0000 (0:00:01.117) 0:05:31.959 ****
2026-02-04 04:47:24.212178 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:47:24.212189 | orchestrator |
2026-02-04 04:47:24.212200 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-04 04:47:24.212211 | orchestrator | Wednesday 04 February 2026 04:47:09 +0000 (0:00:01.171) 0:05:33.130 ****
2026-02-04 04:47:24.212221 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:47:24.212232 | orchestrator |
2026-02-04 04:47:24.212243 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-04 04:47:24.212254 | orchestrator | Wednesday 04 February 2026 04:47:11 +0000 (0:00:01.137) 0:05:34.268 ****
2026-02-04 04:47:24.212265 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:47:24.212275 | orchestrator |
2026-02-04 04:47:24.212286 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-04 04:47:24.212297 | orchestrator | Wednesday 04 February 2026 04:47:12 +0000 (0:00:01.114) 0:05:35.382 ****
2026-02-04 04:47:24.212308 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 04:47:24.212320 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 04:47:24.212331 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 04:47:24.212341 | orchestrator |
2026-02-04 04:47:24.212352 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-04 04:47:24.212363 | orchestrator | Wednesday 04 February 2026 04:47:13 +0000 (0:00:01.667) 0:05:37.050 ****
2026-02-04 04:47:24.212373 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:47:24.212384 | orchestrator |
2026-02-04 04:47:24.212395 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-04 04:47:24.212406 | orchestrator | Wednesday 04 February 2026 04:47:15 +0000 (0:00:03.278) 0:05:38.339 ****
2026-02-04 04:47:24.212416 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 04:47:24.212427 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 04:47:24.212438 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 04:47:24.212449 | orchestrator |
2026-02-04 04:47:24.212459 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-04 04:47:24.212470 | orchestrator | Wednesday 04 February 2026 04:47:18 +0000 (0:00:03.278) 0:05:41.618 ****
2026-02-04 04:47:24.212489 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 04:47:24.212501 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 04:47:24.212511 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 04:47:24.212522 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:47:24.212533 | orchestrator |
2026-02-04 04:47:24.212544 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-04 04:47:24.212555 | orchestrator | Wednesday 04 February 2026 04:47:19 +0000 (0:00:01.385) 0:05:43.004 ****
2026-02-04 04:47:24.212584 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-04 04:47:24.212600 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-04 04:47:24.212611 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-04 04:47:24.212622 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:47:24.212633 | orchestrator |
2026-02-04 04:47:24.212644 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-04 04:47:24.212655 | orchestrator | Wednesday 04 February 2026 04:47:21 +0000 (0:00:01.936) 0:05:44.940 ****
2026-02-04 04:47:24.212667 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-04 04:47:24.212687 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment |
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 04:47:24.212699 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 04:47:24.212710 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:47:24.212721 | orchestrator | 2026-02-04 04:47:24.212732 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-04 04:47:24.212742 | orchestrator | Wednesday 04 February 2026 04:47:22 +0000 (0:00:01.161) 0:05:46.101 **** 2026-02-04 04:47:24.212756 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'd8f725914c3c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-04 04:47:15.684421', 'end': '2026-02-04 04:47:15.726977', 'delta': '0:00:00.042556', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d8f725914c3c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-04 04:47:24.212777 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'e8207b686900', 'stderr': '', 'rc': 0, 'cmd': 
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-04 04:47:16.241228', 'end': '2026-02-04 04:47:16.305860', 'delta': '0:00:00.064632', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e8207b686900'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-04 04:47:24.212797 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c48be97cec44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-04 04:47:17.147472', 'end': '2026-02-04 04:47:17.201928', 'delta': '0:00:00.054456', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c48be97cec44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-04 04:47:43.113073 | orchestrator | 2026-02-04 04:47:43.113163 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-04 04:47:43.113174 | orchestrator | Wednesday 04 February 2026 04:47:24 +0000 (0:00:01.291) 0:05:47.392 **** 2026-02-04 04:47:43.113182 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:47:43.113189 | orchestrator | 2026-02-04 04:47:43.113196 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-04 04:47:43.113203 | orchestrator | 
Wednesday 04 February 2026 04:47:25 +0000 (0:00:01.250) 0:05:48.642 ****
2026-02-04 04:47:43.113209 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:47:43.113216 | orchestrator |
2026-02-04 04:47:43.113222 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-04 04:47:43.113229 | orchestrator | Wednesday 04 February 2026 04:47:26 +0000 (0:00:01.200) 0:05:49.843 ****
2026-02-04 04:47:43.113235 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:47:43.113241 | orchestrator |
2026-02-04 04:47:43.113248 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-04 04:47:43.113254 | orchestrator | Wednesday 04 February 2026 04:47:27 +0000 (0:00:01.155) 0:05:50.999 ****
2026-02-04 04:47:43.113261 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-02-04 04:47:43.113267 | orchestrator |
2026-02-04 04:47:43.113286 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-04 04:47:43.113293 | orchestrator | Wednesday 04 February 2026 04:47:30 +0000 (0:00:02.354) 0:05:53.353 ****
2026-02-04 04:47:43.113299 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:47:43.113305 | orchestrator |
2026-02-04 04:47:43.113311 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-04 04:47:43.113317 | orchestrator | Wednesday 04 February 2026 04:47:31 +0000 (0:00:01.150) 0:05:54.504 ****
2026-02-04 04:47:43.113324 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:47:43.113330 | orchestrator |
2026-02-04 04:47:43.113336 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-04 04:47:43.113342 | orchestrator | Wednesday 04 February 2026 04:47:32 +0000 (0:00:01.250) 0:05:55.671 ****
2026-02-04 04:47:43.113348 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:47:43.113354 | orchestrator |
2026-02-04 04:47:43.113361 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-04 04:47:43.113367 | orchestrator | Wednesday 04 February 2026 04:47:33 +0000 (0:00:01.250) 0:05:56.922 ****
2026-02-04 04:47:43.113390 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:47:43.113397 | orchestrator |
2026-02-04 04:47:43.113403 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-04 04:47:43.113409 | orchestrator | Wednesday 04 February 2026 04:47:34 +0000 (0:00:01.118) 0:05:58.041 ****
2026-02-04 04:47:43.113415 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:47:43.113421 | orchestrator |
2026-02-04 04:47:43.113427 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-04 04:47:43.113434 | orchestrator | Wednesday 04 February 2026 04:47:35 +0000 (0:00:01.146) 0:05:59.187 ****
2026-02-04 04:47:43.113440 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:47:43.113446 | orchestrator |
2026-02-04 04:47:43.113452 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-04 04:47:43.113458 | orchestrator | Wednesday 04 February 2026 04:47:37 +0000 (0:00:01.181) 0:06:00.369 ****
2026-02-04 04:47:43.113464 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:47:43.113470 | orchestrator |
2026-02-04 04:47:43.113477 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-04 04:47:43.113492 | orchestrator | Wednesday 04 February 2026 04:47:38 +0000 (0:00:01.172) 0:06:01.542 ****
2026-02-04 04:47:43.113498 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:47:43.113504 | orchestrator |
2026-02-04 04:47:43.113510 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-04 04:47:43.113517 | orchestrator | Wednesday 04 February 2026 04:47:39 +0000 (0:00:01.123)
0:06:02.666 **** 2026-02-04 04:47:43.113523 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:47:43.113529 | orchestrator | 2026-02-04 04:47:43.113535 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-04 04:47:43.113542 | orchestrator | Wednesday 04 February 2026 04:47:40 +0000 (0:00:01.191) 0:06:03.858 **** 2026-02-04 04:47:43.113548 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:47:43.113554 | orchestrator | 2026-02-04 04:47:43.113560 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-04 04:47:43.113567 | orchestrator | Wednesday 04 February 2026 04:47:41 +0000 (0:00:01.200) 0:06:05.058 **** 2026-02-04 04:47:43.113575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:47:43.113584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:47:43.113605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:47:43.113615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-04 04:47:43.113634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:47:43.113649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:47:43.113656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:47:43.113673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5c0a15c2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-04 04:47:44.379550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:47:44.379688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 04:47:44.379706 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:47:44.379720 | orchestrator | 2026-02-04 04:47:44.379732 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-04 04:47:44.379744 | orchestrator | Wednesday 04 February 2026 04:47:43 +0000 (0:00:01.237) 0:06:06.296 **** 2026-02-04 04:47:44.379757 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:47:44.379771 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:47:44.379783 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:47:44.379797 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:47:44.379828 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:47:44.379853 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:47:44.379865 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:47:44.379879 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5c0a15c2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:47:44.379901 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:48:39.699227 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 04:48:39.699378 | 
orchestrator | skipping: [testbed-node-0]
2026-02-04 04:48:39.699408 | orchestrator |
2026-02-04 04:48:39.699429 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-04 04:48:39.699442 | orchestrator | Wednesday 04 February 2026 04:47:44 +0000 (0:00:01.269) 0:06:07.566 ****
2026-02-04 04:48:39.699453 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:48:39.699466 | orchestrator |
2026-02-04 04:48:39.699477 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-04 04:48:39.699488 | orchestrator | Wednesday 04 February 2026 04:47:45 +0000 (0:00:01.536) 0:06:09.102 ****
2026-02-04 04:48:39.699499 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:48:39.699510 | orchestrator |
2026-02-04 04:48:39.699521 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-04 04:48:39.699532 | orchestrator | Wednesday 04 February 2026 04:47:47 +0000 (0:00:01.135) 0:06:10.237 ****
2026-02-04 04:48:39.699543 | orchestrator | ok: [testbed-node-0]
2026-02-04 04:48:39.699554 | orchestrator |
2026-02-04 04:48:39.699565 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-04 04:48:39.699576 | orchestrator | Wednesday 04 February 2026 04:47:48 +0000 (0:00:01.492) 0:06:11.730 ****
2026-02-04 04:48:39.699587 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:48:39.699598 | orchestrator |
2026-02-04 04:48:39.699610 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-04 04:48:39.699620 | orchestrator | Wednesday 04 February 2026 04:47:49 +0000 (0:00:01.132) 0:06:12.863 ****
2026-02-04 04:48:39.699631 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:48:39.699642 | orchestrator |
2026-02-04 04:48:39.699653 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-04 04:48:39.699664 | orchestrator | Wednesday 04 February 2026 04:47:50 +0000 (0:00:01.240) 0:06:14.104 ****
2026-02-04 04:48:39.699675 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:48:39.699686 | orchestrator |
2026-02-04 04:48:39.699697 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-04 04:48:39.699708 | orchestrator | Wednesday 04 February 2026 04:47:52 +0000 (0:00:01.148) 0:06:15.252 ****
2026-02-04 04:48:39.699719 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 04:48:39.699730 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 04:48:39.699741 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 04:48:39.699752 | orchestrator |
2026-02-04 04:48:39.699762 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-04 04:48:39.699773 | orchestrator | Wednesday 04 February 2026 04:47:54 +0000 (0:00:01.991) 0:06:17.244 ****
2026-02-04 04:48:39.699784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 04:48:39.699796 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 04:48:39.699807 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 04:48:39.699818 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:48:39.699828 | orchestrator |
2026-02-04 04:48:39.699839 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-04 04:48:39.699875 | orchestrator | Wednesday 04 February 2026 04:47:55 +0000 (0:00:01.186) 0:06:18.430 ****
2026-02-04 04:48:39.699886 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:48:39.699897 | orchestrator |
2026-02-04 04:48:39.699908 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-04 04:48:39.699919 | orchestrator | Wednesday 04 February 2026 04:47:56 +0000 (0:00:01.174) 0:06:19.605 ****
2026-02-04 04:48:39.699930 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 04:48:39.699942 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 04:48:39.699998 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 04:48:39.700022 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-04 04:48:39.700041 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-04 04:48:39.700057 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-04 04:48:39.700069 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-04 04:48:39.700079 | orchestrator |
2026-02-04 04:48:39.700090 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-04 04:48:39.700101 | orchestrator | Wednesday 04 February 2026 04:47:58 +0000 (0:00:02.890) 0:06:21.739 ****
2026-02-04 04:48:39.700111 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 04:48:39.700122 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 04:48:39.700133 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 04:48:39.700143 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-04 04:48:39.700174 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-04 04:48:39.700186 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-04 04:48:39.700197 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-04 04:48:39.700207 | orchestrator |
2026-02-04 04:48:39.700218 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-02-04 04:48:39.700228 | orchestrator | Wednesday 04 February 2026 04:48:01 +0000 (0:00:02.890) 0:06:24.629 ****
2026-02-04 04:48:39.700239 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-02-04 04:48:39.700250 | orchestrator |
2026-02-04 04:48:39.700270 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-02-04 04:48:39.700281 | orchestrator | Wednesday 04 February 2026 04:48:03 +0000 (0:00:02.337) 0:06:26.967 ****
2026-02-04 04:48:39.700292 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:48:39.700302 | orchestrator |
2026-02-04 04:48:39.700313 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-02-04 04:48:39.700324 | orchestrator | Wednesday 04 February 2026 04:48:05 +0000 (0:00:01.330) 0:06:28.298 ****
2026-02-04 04:48:39.700335 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:48:39.700345 | orchestrator |
2026-02-04 04:48:39.700356 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-02-04 04:48:39.700367 | orchestrator | Wednesday 04 February 2026 04:48:06 +0000 (0:00:01.172) 0:06:29.471 ****
2026-02-04 04:48:39.700378 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-02-04 04:48:39.700388 | orchestrator |
2026-02-04 04:48:39.700399 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-02-04 04:48:39.700410 | orchestrator | Wednesday 04 February 2026 04:48:08 +0000 (0:00:02.259) 0:06:31.730 ****
2026-02-04 04:48:39.700421 | orchestrator | skipping: [testbed-node-0]
2026-02-04 04:48:39.700431 | orchestrator |
2026-02-04 04:48:39.700442 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-02-04 04:48:39.700462 | orchestrator | Wednesday 04 February 2026 04:48:09 +0000 (0:00:01.172) 0:06:32.902 ****
2026-02-04 04:48:39.700473 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 04:48:39.700483 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 04:48:39.700494 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 04:48:39.700505 | orchestrator |
2026-02-04 04:48:39.700516 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-04 04:48:39.700526 | orchestrator | Wednesday 04 February 2026 04:48:12 +0000 (0:00:02.535) 0:06:35.438 ****
2026-02-04 04:48:39.700537 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-04 04:48:39.700548 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-04 04:48:39.700560 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-04 04:48:39.700571 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-04 04:48:39.700582 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-04 04:48:39.700593 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-04 04:48:39.700603 | orchestrator |
2026-02-04 04:48:39.700614 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-04 04:48:39.700625 | orchestrator | Wednesday 04 February 2026 04:48:25 +0000 (0:00:13.370) 0:06:48.809 ****
2026-02-04 04:48:39.700636 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 04:48:39.700647 | orchestrator | ok:
[testbed-node-0] => (item=testbed-node-0) 2026-02-04 04:48:39.700658 | orchestrator | 2026-02-04 04:48:39.700669 | orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-04 04:48:39.700679 | orchestrator | Wednesday 04 February 2026 04:48:29 +0000 (0:00:04.076) 0:06:52.885 **** 2026-02-04 04:48:39.700690 | orchestrator | changed: [testbed-node-0] 2026-02-04 04:48:39.700701 | orchestrator | 2026-02-04 04:48:39.700712 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-04 04:48:39.700723 | orchestrator | Wednesday 04 February 2026 04:48:32 +0000 (0:00:02.529) 0:06:55.415 **** 2026-02-04 04:48:39.700733 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-04 04:48:39.700744 | orchestrator | 2026-02-04 04:48:39.700755 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-04 04:48:39.700766 | orchestrator | Wednesday 04 February 2026 04:48:33 +0000 (0:00:01.441) 0:06:56.856 **** 2026-02-04 04:48:39.700776 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-04 04:48:39.700787 | orchestrator | 2026-02-04 04:48:39.700798 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-04 04:48:39.700809 | orchestrator | Wednesday 04 February 2026 04:48:35 +0000 (0:00:01.604) 0:06:58.461 **** 2026-02-04 04:48:39.700819 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:48:39.700830 | orchestrator | 2026-02-04 04:48:39.700841 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-04 04:48:39.700852 | orchestrator | Wednesday 04 February 2026 04:48:36 +0000 (0:00:01.582) 0:07:00.044 **** 2026-02-04 04:48:39.700862 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:48:39.700873 | orchestrator | 
2026-02-04 04:48:39.700884 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-04 04:48:39.700894 | orchestrator | Wednesday 04 February 2026 04:48:38 +0000 (0:00:01.183) 0:07:01.227 **** 2026-02-04 04:48:39.700905 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:48:39.700916 | orchestrator | 2026-02-04 04:48:39.700933 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-04 04:49:32.056583 | orchestrator | Wednesday 04 February 2026 04:48:39 +0000 (0:00:01.654) 0:07:02.882 **** 2026-02-04 04:49:32.056700 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.056719 | orchestrator | 2026-02-04 04:49:32.056732 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-04 04:49:32.056743 | orchestrator | Wednesday 04 February 2026 04:48:40 +0000 (0:00:01.199) 0:07:04.081 **** 2026-02-04 04:49:32.056754 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:49:32.056766 | orchestrator | 2026-02-04 04:49:32.056777 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-04 04:49:32.056804 | orchestrator | Wednesday 04 February 2026 04:48:42 +0000 (0:00:01.616) 0:07:05.698 **** 2026-02-04 04:49:32.056816 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.056826 | orchestrator | 2026-02-04 04:49:32.056837 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-04 04:49:32.056848 | orchestrator | Wednesday 04 February 2026 04:48:43 +0000 (0:00:01.194) 0:07:06.892 **** 2026-02-04 04:49:32.056859 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.056870 | orchestrator | 2026-02-04 04:49:32.056881 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-04 04:49:32.056892 | orchestrator | Wednesday 04 February 2026 04:48:44 +0000 
(0:00:01.140) 0:07:08.033 **** 2026-02-04 04:49:32.056903 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:49:32.056913 | orchestrator | 2026-02-04 04:49:32.056925 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-04 04:49:32.056936 | orchestrator | Wednesday 04 February 2026 04:48:46 +0000 (0:00:01.644) 0:07:09.678 **** 2026-02-04 04:49:32.056946 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:49:32.056957 | orchestrator | 2026-02-04 04:49:32.056968 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-04 04:49:32.057031 | orchestrator | Wednesday 04 February 2026 04:48:48 +0000 (0:00:01.558) 0:07:11.236 **** 2026-02-04 04:49:32.057043 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057054 | orchestrator | 2026-02-04 04:49:32.057065 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-04 04:49:32.057076 | orchestrator | Wednesday 04 February 2026 04:48:49 +0000 (0:00:01.169) 0:07:12.406 **** 2026-02-04 04:49:32.057087 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:49:32.057098 | orchestrator | 2026-02-04 04:49:32.057112 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-04 04:49:32.057126 | orchestrator | Wednesday 04 February 2026 04:48:50 +0000 (0:00:01.153) 0:07:13.560 **** 2026-02-04 04:49:32.057139 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057152 | orchestrator | 2026-02-04 04:49:32.057165 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-04 04:49:32.057177 | orchestrator | Wednesday 04 February 2026 04:48:51 +0000 (0:00:01.160) 0:07:14.721 **** 2026-02-04 04:49:32.057191 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057203 | orchestrator | 2026-02-04 04:49:32.057217 | orchestrator | TASK [ceph-handler : Set_fact 
handler_rgw_status] ****************************** 2026-02-04 04:49:32.057230 | orchestrator | Wednesday 04 February 2026 04:48:52 +0000 (0:00:01.121) 0:07:15.842 **** 2026-02-04 04:49:32.057243 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057256 | orchestrator | 2026-02-04 04:49:32.057269 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-04 04:49:32.057281 | orchestrator | Wednesday 04 February 2026 04:48:53 +0000 (0:00:01.138) 0:07:16.981 **** 2026-02-04 04:49:32.057295 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057307 | orchestrator | 2026-02-04 04:49:32.057320 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-04 04:49:32.057333 | orchestrator | Wednesday 04 February 2026 04:48:54 +0000 (0:00:01.154) 0:07:18.135 **** 2026-02-04 04:49:32.057346 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057360 | orchestrator | 2026-02-04 04:49:32.057373 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-04 04:49:32.057407 | orchestrator | Wednesday 04 February 2026 04:48:56 +0000 (0:00:01.165) 0:07:19.300 **** 2026-02-04 04:49:32.057419 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:49:32.057430 | orchestrator | 2026-02-04 04:49:32.057441 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-04 04:49:32.057452 | orchestrator | Wednesday 04 February 2026 04:48:57 +0000 (0:00:01.180) 0:07:20.481 **** 2026-02-04 04:49:32.057463 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:49:32.057473 | orchestrator | 2026-02-04 04:49:32.057484 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-04 04:49:32.057495 | orchestrator | Wednesday 04 February 2026 04:48:58 +0000 (0:00:01.154) 0:07:21.635 **** 2026-02-04 04:49:32.057506 | orchestrator | ok: 
[testbed-node-0] 2026-02-04 04:49:32.057517 | orchestrator | 2026-02-04 04:49:32.057528 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-04 04:49:32.057538 | orchestrator | Wednesday 04 February 2026 04:48:59 +0000 (0:00:01.127) 0:07:22.763 **** 2026-02-04 04:49:32.057549 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057560 | orchestrator | 2026-02-04 04:49:32.057571 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-04 04:49:32.057582 | orchestrator | Wednesday 04 February 2026 04:49:00 +0000 (0:00:01.136) 0:07:23.900 **** 2026-02-04 04:49:32.057592 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057603 | orchestrator | 2026-02-04 04:49:32.057614 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-04 04:49:32.057624 | orchestrator | Wednesday 04 February 2026 04:49:01 +0000 (0:00:01.104) 0:07:25.004 **** 2026-02-04 04:49:32.057635 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057646 | orchestrator | 2026-02-04 04:49:32.057657 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-04 04:49:32.057668 | orchestrator | Wednesday 04 February 2026 04:49:02 +0000 (0:00:01.150) 0:07:26.154 **** 2026-02-04 04:49:32.057678 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057689 | orchestrator | 2026-02-04 04:49:32.057700 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-04 04:49:32.057711 | orchestrator | Wednesday 04 February 2026 04:49:04 +0000 (0:00:01.119) 0:07:27.275 **** 2026-02-04 04:49:32.057740 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057751 | orchestrator | 2026-02-04 04:49:32.057762 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-04 04:49:32.057773 | 
orchestrator | Wednesday 04 February 2026 04:49:05 +0000 (0:00:01.185) 0:07:28.460 **** 2026-02-04 04:49:32.057784 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057795 | orchestrator | 2026-02-04 04:49:32.057806 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-04 04:49:32.057817 | orchestrator | Wednesday 04 February 2026 04:49:06 +0000 (0:00:01.182) 0:07:29.643 **** 2026-02-04 04:49:32.057827 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057838 | orchestrator | 2026-02-04 04:49:32.057855 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-04 04:49:32.057867 | orchestrator | Wednesday 04 February 2026 04:49:07 +0000 (0:00:01.186) 0:07:30.829 **** 2026-02-04 04:49:32.057878 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057889 | orchestrator | 2026-02-04 04:49:32.057899 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-04 04:49:32.057910 | orchestrator | Wednesday 04 February 2026 04:49:08 +0000 (0:00:01.174) 0:07:32.004 **** 2026-02-04 04:49:32.057921 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.057932 | orchestrator | 2026-02-04 04:49:32.057942 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-04 04:49:32.057953 | orchestrator | Wednesday 04 February 2026 04:49:09 +0000 (0:00:01.126) 0:07:33.131 **** 2026-02-04 04:49:32.057964 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.058066 | orchestrator | 2026-02-04 04:49:32.058081 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-04 04:49:32.058104 | orchestrator | Wednesday 04 February 2026 04:49:11 +0000 (0:00:01.116) 0:07:34.247 **** 2026-02-04 04:49:32.058115 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.058134 | 
orchestrator | 2026-02-04 04:49:32.058164 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-04 04:49:32.058181 | orchestrator | Wednesday 04 February 2026 04:49:12 +0000 (0:00:01.120) 0:07:35.367 **** 2026-02-04 04:49:32.058192 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.058203 | orchestrator | 2026-02-04 04:49:32.058214 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-04 04:49:32.058225 | orchestrator | Wednesday 04 February 2026 04:49:13 +0000 (0:00:01.127) 0:07:36.495 **** 2026-02-04 04:49:32.058236 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:49:32.058246 | orchestrator | 2026-02-04 04:49:32.058257 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-04 04:49:32.058272 | orchestrator | Wednesday 04 February 2026 04:49:15 +0000 (0:00:02.038) 0:07:38.534 **** 2026-02-04 04:49:32.058283 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:49:32.058294 | orchestrator | 2026-02-04 04:49:32.058305 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-04 04:49:32.058315 | orchestrator | Wednesday 04 February 2026 04:49:17 +0000 (0:00:02.459) 0:07:40.993 **** 2026-02-04 04:49:32.058326 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-02-04 04:49:32.058339 | orchestrator | 2026-02-04 04:49:32.058350 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-04 04:49:32.058360 | orchestrator | Wednesday 04 February 2026 04:49:19 +0000 (0:00:01.480) 0:07:42.474 **** 2026-02-04 04:49:32.058371 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.058382 | orchestrator | 2026-02-04 04:49:32.058393 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-04 
04:49:32.058403 | orchestrator | Wednesday 04 February 2026 04:49:20 +0000 (0:00:01.138) 0:07:43.613 **** 2026-02-04 04:49:32.058414 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.058425 | orchestrator | 2026-02-04 04:49:32.058436 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-04 04:49:32.058446 | orchestrator | Wednesday 04 February 2026 04:49:21 +0000 (0:00:01.156) 0:07:44.770 **** 2026-02-04 04:49:32.058457 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-04 04:49:32.058468 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-04 04:49:32.058479 | orchestrator | 2026-02-04 04:49:32.058490 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-04 04:49:32.058500 | orchestrator | Wednesday 04 February 2026 04:49:23 +0000 (0:00:01.860) 0:07:46.630 **** 2026-02-04 04:49:32.058511 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:49:32.058522 | orchestrator | 2026-02-04 04:49:32.058533 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-04 04:49:32.058544 | orchestrator | Wednesday 04 February 2026 04:49:25 +0000 (0:00:01.713) 0:07:48.343 **** 2026-02-04 04:49:32.058555 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.058565 | orchestrator | 2026-02-04 04:49:32.058576 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-04 04:49:32.058587 | orchestrator | Wednesday 04 February 2026 04:49:26 +0000 (0:00:01.202) 0:07:49.546 **** 2026-02-04 04:49:32.058597 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.058608 | orchestrator | 2026-02-04 04:49:32.058619 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-04 04:49:32.058630 | orchestrator | Wednesday 04 
February 2026 04:49:27 +0000 (0:00:01.117) 0:07:50.663 **** 2026-02-04 04:49:32.058640 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:49:32.058651 | orchestrator | 2026-02-04 04:49:32.058662 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-04 04:49:32.058673 | orchestrator | Wednesday 04 February 2026 04:49:28 +0000 (0:00:01.151) 0:07:51.814 **** 2026-02-04 04:49:32.058691 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-02-04 04:49:32.058702 | orchestrator | 2026-02-04 04:49:32.058713 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-04 04:49:32.058723 | orchestrator | Wednesday 04 February 2026 04:49:30 +0000 (0:00:01.520) 0:07:53.335 **** 2026-02-04 04:49:32.058734 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:49:32.058745 | orchestrator | 2026-02-04 04:49:32.058765 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-04 04:50:19.602849 | orchestrator | Wednesday 04 February 2026 04:49:32 +0000 (0:00:01.904) 0:07:55.239 **** 2026-02-04 04:50:19.602966 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-04 04:50:19.602983 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-04 04:50:19.603076 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-04 04:50:19.603091 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603104 | orchestrator | 2026-02-04 04:50:19.603116 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-04 04:50:19.603127 | orchestrator | Wednesday 04 February 2026 04:49:33 +0000 (0:00:01.227) 0:07:56.466 **** 2026-02-04 04:50:19.603139 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603150 
| orchestrator | 2026-02-04 04:50:19.603162 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-04 04:50:19.603173 | orchestrator | Wednesday 04 February 2026 04:49:34 +0000 (0:00:01.162) 0:07:57.629 **** 2026-02-04 04:50:19.603184 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603195 | orchestrator | 2026-02-04 04:50:19.603206 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-04 04:50:19.603217 | orchestrator | Wednesday 04 February 2026 04:49:35 +0000 (0:00:01.252) 0:07:58.882 **** 2026-02-04 04:50:19.603228 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603239 | orchestrator | 2026-02-04 04:50:19.603250 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-04 04:50:19.603261 | orchestrator | Wednesday 04 February 2026 04:49:36 +0000 (0:00:01.200) 0:08:00.082 **** 2026-02-04 04:50:19.603272 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603283 | orchestrator | 2026-02-04 04:50:19.603295 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-04 04:50:19.603306 | orchestrator | Wednesday 04 February 2026 04:49:38 +0000 (0:00:01.189) 0:08:01.271 **** 2026-02-04 04:50:19.603317 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603328 | orchestrator | 2026-02-04 04:50:19.603339 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-04 04:50:19.603350 | orchestrator | Wednesday 04 February 2026 04:49:39 +0000 (0:00:01.170) 0:08:02.442 **** 2026-02-04 04:50:19.603361 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:50:19.603373 | orchestrator | 2026-02-04 04:50:19.603385 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-04 04:50:19.603399 | orchestrator | Wednesday 04 February 2026 
04:49:41 +0000 (0:00:02.698) 0:08:05.140 **** 2026-02-04 04:50:19.603412 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:50:19.603424 | orchestrator | 2026-02-04 04:50:19.603437 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-04 04:50:19.603451 | orchestrator | Wednesday 04 February 2026 04:49:43 +0000 (0:00:01.131) 0:08:06.272 **** 2026-02-04 04:50:19.603463 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-02-04 04:50:19.603476 | orchestrator | 2026-02-04 04:50:19.603488 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-04 04:50:19.603501 | orchestrator | Wednesday 04 February 2026 04:49:44 +0000 (0:00:01.464) 0:08:07.736 **** 2026-02-04 04:50:19.603513 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603526 | orchestrator | 2026-02-04 04:50:19.603566 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-04 04:50:19.603579 | orchestrator | Wednesday 04 February 2026 04:49:45 +0000 (0:00:01.175) 0:08:08.912 **** 2026-02-04 04:50:19.603592 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603605 | orchestrator | 2026-02-04 04:50:19.603617 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-04 04:50:19.603630 | orchestrator | Wednesday 04 February 2026 04:49:46 +0000 (0:00:01.189) 0:08:10.101 **** 2026-02-04 04:50:19.603642 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603654 | orchestrator | 2026-02-04 04:50:19.603667 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-04 04:50:19.603680 | orchestrator | Wednesday 04 February 2026 04:49:48 +0000 (0:00:01.142) 0:08:11.243 **** 2026-02-04 04:50:19.603693 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603705 | orchestrator | 
2026-02-04 04:50:19.603718 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-04 04:50:19.603731 | orchestrator | Wednesday 04 February 2026 04:49:49 +0000 (0:00:01.144) 0:08:12.388 **** 2026-02-04 04:50:19.603744 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603755 | orchestrator | 2026-02-04 04:50:19.603766 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-04 04:50:19.603777 | orchestrator | Wednesday 04 February 2026 04:49:50 +0000 (0:00:01.142) 0:08:13.531 **** 2026-02-04 04:50:19.603788 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603814 | orchestrator | 2026-02-04 04:50:19.603838 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-04 04:50:19.603849 | orchestrator | Wednesday 04 February 2026 04:49:51 +0000 (0:00:01.176) 0:08:14.707 **** 2026-02-04 04:50:19.603860 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603871 | orchestrator | 2026-02-04 04:50:19.603882 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-04 04:50:19.603893 | orchestrator | Wednesday 04 February 2026 04:49:52 +0000 (0:00:01.172) 0:08:15.880 **** 2026-02-04 04:50:19.603904 | orchestrator | skipping: [testbed-node-0] 2026-02-04 04:50:19.603914 | orchestrator | 2026-02-04 04:50:19.603925 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-04 04:50:19.603936 | orchestrator | Wednesday 04 February 2026 04:49:53 +0000 (0:00:01.230) 0:08:17.111 **** 2026-02-04 04:50:19.603947 | orchestrator | ok: [testbed-node-0] 2026-02-04 04:50:19.603958 | orchestrator | 2026-02-04 04:50:19.603968 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-04 04:50:19.603979 | orchestrator | Wednesday 04 February 2026 04:49:55 +0000 
(0:00:01.188) 0:08:18.299 **** 2026-02-04 04:50:19.604014 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-02-04 04:50:19.604029 | orchestrator | 2026-02-04 04:50:19.604058 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-04 04:50:19.604070 | orchestrator | Wednesday 04 February 2026 04:49:56 +0000 (0:00:01.526) 0:08:19.826 **** 2026-02-04 04:50:19.604122 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-02-04 04:50:19.604135 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-04 04:50:19.604146 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-04 04:50:19.604157 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-04 04:50:19.604173 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-04 04:50:19.604184 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-04 04:50:19.604194 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-04 04:50:19.604205 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-04 04:50:19.604216 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-04 04:50:19.604227 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-04 04:50:19.604237 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-04 04:50:19.604257 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-04 04:50:19.604268 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-04 04:50:19.604279 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-04 04:50:19.604290 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-02-04 04:50:19.604301 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Wednesday 04 February 2026 04:50:03 +0000 (0:00:06.912) 0:08:26.739 ****
skipping: [testbed-node-0]

TASK [ceph-config : Reset num_osds] ********************************************
Wednesday 04 February 2026 04:50:04 +0000 (0:00:01.105) 0:08:27.844 ****
skipping: [testbed-node-0]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Wednesday 04 February 2026 04:50:05 +0000 (0:00:01.141) 0:08:28.986 ****
skipping: [testbed-node-0]

TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Wednesday 04 February 2026 04:50:06 +0000 (0:00:01.127) 0:08:30.114 ****
skipping: [testbed-node-0]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Wednesday 04 February 2026 04:50:08 +0000 (0:00:01.136) 0:08:31.250 ****
skipping: [testbed-node-0]

TASK [ceph-config : Set_fact _devices] *****************************************
Wednesday 04 February 2026 04:50:09 +0000 (0:00:01.113) 0:08:32.364 ****
skipping: [testbed-node-0]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Wednesday 04 February 2026 04:50:10 +0000 (0:00:01.160) 0:08:33.524 ****
skipping: [testbed-node-0]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Wednesday 04 February 2026 04:50:11 +0000 (0:00:01.130) 0:08:34.655 ****
skipping: [testbed-node-0]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Wednesday 04 February 2026 04:50:12 +0000 (0:00:01.174) 0:08:35.829 ****
skipping: [testbed-node-0]

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Wednesday 04 February 2026 04:50:13 +0000 (0:00:01.162) 0:08:36.992 ****
skipping: [testbed-node-0]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Wednesday 04 February 2026 04:50:14 +0000 (0:00:01.110) 0:08:38.103 ****
skipping: [testbed-node-0]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Wednesday 04 February 2026 04:50:16 +0000 (0:00:01.169) 0:08:39.273 ****
skipping: [testbed-node-0]

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Wednesday 04 February 2026 04:50:17 +0000 (0:00:01.140) 0:08:40.413 ****
skipping: [testbed-node-0]

TASK [ceph-config : Render rgw configs] ****************************************
Wednesday 04 February 2026 04:50:18 +0000 (0:00:01.220) 0:08:41.633 ****
skipping: [testbed-node-0]

TASK [ceph-config : Set config to cluster] *************************************
Wednesday 04 February 2026 04:50:19 +0000 (0:00:01.150) 0:08:42.784 ****
skipping: [testbed-node-0]

TASK [ceph-config : Set rgw configs to file] ***********************************
Wednesday 04 February 2026 04:50:20 +0000 (0:00:01.270) 0:08:44.055 ****
skipping: [testbed-node-0]

TASK [ceph-config : Create ceph conf directory] ********************************
Wednesday 04 February 2026 04:50:21 +0000 (0:00:01.116) 0:08:45.171 ****
skipping: [testbed-node-0]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Wednesday 04 February 2026 04:50:23 +0000 (0:00:01.128) 0:08:46.299 ****
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Wednesday 04 February 2026 04:50:24 +0000 (0:00:01.119) 0:08:47.419 ****
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Wednesday 04 February 2026 04:50:25 +0000 (0:00:01.140) 0:08:48.560 ****
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Wednesday 04 February 2026 04:50:26 +0000 (0:00:01.180) 0:08:49.740 ****
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _interface] ****************************************
Wednesday 04 February 2026 04:50:27 +0000 (0:00:01.146) 0:08:50.887 ****
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Wednesday 04 February 2026 04:50:29 +0000 (0:00:01.419) 0:08:52.306 ****
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Wednesday 04 February 2026 04:50:30 +0000 (0:00:01.433) 0:08:53.739 ****
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Wednesday 04 February 2026 04:50:31 +0000 (0:00:01.434) 0:08:55.174 ****
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Wednesday 04 February 2026 04:50:33 +0000 (0:00:01.125) 0:08:56.300 ****
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]

TASK [ceph-config : Generate Ceph file] ****************************************
Wednesday 04 February 2026 04:50:34 +0000 (0:00:01.402) 0:08:57.703 ****
changed: [testbed-node-0]

TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
Wednesday 04 February 2026 04:50:36 +0000 (0:00:01.823) 0:08:59.527 ****
ok: [testbed-node-0]

TASK [ceph-mon : Include deploy_monitors.yml] **********************************
Wednesday 04 February 2026 04:50:37 +0000 (0:00:01.165) 0:09:00.692 ****
included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0

TASK [ceph-mon : Check if monitor initial keyring already exists] **************
Wednesday 04 February 2026 04:50:39 +0000 (0:00:01.647) 0:09:02.340 ****
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Wednesday 04 February 2026 04:50:42 +0000 (0:00:03.477) 0:09:05.817 ****
skipping: [testbed-node-0]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Wednesday 04 February 2026 04:50:43 +0000 (0:00:01.245) 0:09:07.063 ****
ok: [testbed-node-0]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Wednesday 04 February 2026 04:50:45 +0000 (0:00:01.141) 0:09:08.204 ****
ok: [testbed-node-0]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Wednesday 04 February 2026 04:50:46 +0000 (0:00:01.166) 0:09:09.371 ****
changed: [testbed-node-0]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Wednesday 04 February 2026 04:50:48 +0000 (0:00:02.078) 0:09:11.450 ****
ok: [testbed-node-0]

TASK [ceph-mon : Create monitor directory] *************************************
Wednesday 04 February 2026 04:50:49 +0000 (0:00:01.632) 0:09:13.083 ****
ok: [testbed-node-0]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Wednesday 04 February 2026 04:50:51 +0000 (0:00:01.494) 0:09:14.578 ****
ok: [testbed-node-0]

TASK [ceph-mon : Create admin keyring] *****************************************
Wednesday 04 February 2026 04:50:52 +0000 (0:00:01.493) 0:09:16.072 ****
ok: [testbed-node-0]

TASK [ceph-mon : Slurp admin keyring] ******************************************
Wednesday 04 February 2026 04:50:54 +0000 (0:00:01.760) 0:09:17.833 ****
ok: [testbed-node-0]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Wednesday 04 February 2026 04:50:56 +0000 (0:00:01.765) 0:09:19.599 ****
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
ok: [testbed-node-0 -> {{ item }}]

TASK [ceph-mon : Import admin keyring into mon keyring] ************************
Wednesday 04 February 2026 04:51:00 +0000 (0:00:03.942) 0:09:23.541 ****
changed: [testbed-node-0]

TASK [ceph-mon : Set_fact ceph-mon container command] **************************
Wednesday 04 February 2026 04:51:02 +0000 (0:00:02.151) 0:09:25.693 ****
ok: [testbed-node-0]

TASK [ceph-mon : Set_fact monmaptool container command] ************************
Wednesday 04 February 2026 04:51:03 +0000 (0:00:01.134) 0:09:26.827 ****
ok: [testbed-node-0]

TASK [ceph-mon : Generate initial monmap] **************************************
Wednesday 04 February 2026 04:51:04 +0000 (0:00:01.169) 0:09:27.997 ****
ok: [testbed-node-0]

TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
Wednesday 04 February 2026 04:51:06 +0000 (0:00:02.184) 0:09:30.182 ****
ok: [testbed-node-0]

TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
Wednesday 04 February 2026 04:51:08 +0000 (0:00:01.511) 0:09:31.694 ****
skipping: [testbed-node-0]

TASK [ceph-mon : Include start_monitor.yml] ************************************
Wednesday 04 February 2026 04:51:09 +0000 (0:00:01.119) 0:09:32.814 ****
included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0

TASK [ceph-mon : Ensure systemd service override directory exists] *************
Wednesday 04 February 2026 04:51:11 +0000 (0:00:01.496) 0:09:34.310 ****
skipping: [testbed-node-0]

TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
Wednesday 04 February 2026 04:51:12 +0000 (0:00:01.128) 0:09:35.438 ****
skipping: [testbed-node-0]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Wednesday 04 February 2026 04:51:13 +0000 (0:00:01.147) 0:09:36.586 ****
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Wednesday 04 February 2026 04:51:14 +0000 (0:00:01.548) 0:09:38.134 ****
changed: [testbed-node-0]

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Wednesday 04 February 2026 04:51:17 +0000 (0:00:02.362) 0:09:40.497 ****
ok: [testbed-node-0]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Wednesday 04 February 2026 04:51:19 +0000 (0:00:02.024) 0:09:42.522 ****
ok: [testbed-node-0]

TASK [ceph-mon : Start the monitor service] ************************************
Wednesday 04 February 2026 04:51:21 +0000 (0:00:02.571) 0:09:45.094 ****
changed: [testbed-node-0]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Wednesday 04 February 2026 04:51:25 +0000 (0:00:03.448) 0:09:48.542 ****
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Wednesday 04 February 2026 04:51:27 +0000 (0:00:01.736) 0:09:50.279 ****
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Wednesday 04 February 2026 04:51:29 +0000 (0:00:02.277) 0:09:52.556 ****
ok: [testbed-node-0]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Wednesday 04 February 2026 04:51:32 +0000 (0:00:03.010) 0:09:55.567 ****
skipping: [testbed-node-0]

TASK [ceph-mon : Set cluster configs] ******************************************
Wednesday 04 February 2026 04:51:33 +0000 (0:00:01.129) 0:09:56.696 ****
ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__1cd1632a1150deff3ad190c64314155d14045454'}])

TASK [Start ceph mgr] **********************************************************
Wednesday 04 February 2026 04:51:43 +0000 (0:00:10.082) 0:10:06.779 ****
changed: [testbed-node-0]

TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
Wednesday 04 February 2026 04:51:46 +0000 (0:00:02.598) 0:10:09.377 ****
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0] => (item=testbed-node-1)
ok: [testbed-node-0] => (item=testbed-node-2)

TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
Wednesday 04 February 2026 04:51:48 +0000 (0:00:02.259) 0:10:11.637 ****
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

TASK [Non container | waiting for the monitor to join the quorum...] ***********
Wednesday 04 February 2026 04:51:49 +0000 (0:00:01.445) 0:10:13.082 ****
skipping: [testbed-node-0]

TASK [Container | waiting for the containerized monitor to join the quorum...] ***
Wednesday 04 February 2026 04:51:51 +0000 (0:00:01.164) 0:10:14.247 ****

STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] ***
FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (5 retries left).
FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (4 retries left).
FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (3 retries left).
FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (2 retries left).
FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (1 retries left).
fatal: [testbed-node-0]: FAILED!
=> {"attempts": 5, "changed": false, "cmd": ["docker", "exec", "ceph-mon-testbed-node-0", "ceph", "--cluster", "ceph", "-m", "192.168.16.10", "quorum_status", "--format", "json"], "delta": "0:05:00.309788", "end": "2026-02-04 05:23:10.043758", "msg": "non-zero return code", "rc": 1, "start": "2026-02-04 05:18:09.733970", "stderr": "2026-02-04T05:23:10.022+0000 70e9b524a640 0 monclient(hunting): authenticate timed out after 300\n[errno 110] RADOS timed out (error connecting to the cluster)", "stderr_lines": ["2026-02-04T05:23:10.022+0000 70e9b524a640 0 monclient(hunting): authenticate timed out after 300", "[errno 110] RADOS timed out (error connecting to the cluster)"], "stdout": "", "stdout_lines": []} 2026-02-04 05:23:17.785435 | orchestrator | 2026-02-04 05:23:17.785468 | orchestrator | TASK [Unmask the mon service] ************************************************** 2026-02-04 05:23:17.785480 | orchestrator | Wednesday 04 February 2026 05:23:11 +0000 (0:31:20.603) 0:41:34.851 **** 2026-02-04 05:23:17.785491 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:23:17.785514 | orchestrator | 2026-02-04 05:23:17.785525 | orchestrator | TASK [Unmask the mgr service] ************************************************** 2026-02-04 05:23:17.785536 | orchestrator | Wednesday 04 February 2026 05:23:13 +0000 (0:00:01.761) 0:41:36.613 **** 2026-02-04 05:23:17.785547 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:23:17.785557 | orchestrator | 2026-02-04 05:23:17.785568 | orchestrator | TASK [Stop the playbook execution] ********************************************* 2026-02-04 05:23:17.785579 | orchestrator | Wednesday 04 February 2026 05:23:15 +0000 (0:00:01.754) 0:41:38.367 **** 2026-02-04 05:23:17.785590 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "There was an error during monitor upgrade. 
Please, check the previous task results."} 2026-02-04 05:23:17.785602 | orchestrator | 2026-02-04 05:23:17.785612 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 05:23:17.785623 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 05:23:17.785635 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0 2026-02-04 05:23:17.785646 | orchestrator | testbed-node-0 : ok=121  changed=10  unreachable=0 failed=1  skipped=164  rescued=1  ignored=0 2026-02-04 05:23:17.785658 | orchestrator | testbed-node-1 : ok=25  changed=2  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0 2026-02-04 05:23:17.785669 | orchestrator | testbed-node-2 : ok=25  changed=2  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0 2026-02-04 05:23:17.785687 | orchestrator | testbed-node-3 : ok=33  changed=2  unreachable=0 failed=0 skipped=74  rescued=0 ignored=0 2026-02-04 05:23:17.785705 | orchestrator | testbed-node-4 : ok=33  changed=2  unreachable=0 failed=0 skipped=71  rescued=0 ignored=0 2026-02-04 05:23:17.785725 | orchestrator | testbed-node-5 : ok=33  changed=2  unreachable=0 failed=0 skipped=71  rescued=0 ignored=0 2026-02-04 05:23:17.785742 | orchestrator | 2026-02-04 05:23:17.785761 | orchestrator | 2026-02-04 05:23:17.785773 | orchestrator | 2026-02-04 05:23:17.785784 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 05:23:17.785811 | orchestrator | Wednesday 04 February 2026 05:23:17 +0000 (0:00:02.583) 0:41:40.951 **** 2026-02-04 05:23:17.785833 | orchestrator | =============================================================================== 2026-02-04 05:23:17.785844 | orchestrator | Container | waiting for the containerized monitor to join the quorum... 
1880.60s 2026-02-04 05:23:17.785855 | orchestrator | Gather and delegate facts ---------------------------------------------- 33.40s 2026-02-04 05:23:17.785866 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 13.37s 2026-02-04 05:23:17.785876 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 12.06s 2026-02-04 05:23:17.785887 | orchestrator | Set cluster configs ---------------------------------------------------- 10.83s 2026-02-04 05:23:17.785898 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.08s 2026-02-04 05:23:17.785908 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.91s 2026-02-04 05:23:17.785919 | orchestrator | Gather facts ------------------------------------------------------------ 6.16s 2026-02-04 05:23:17.785929 | orchestrator | Gather facts on all Ceph hosts for following reference ------------------ 5.20s 2026-02-04 05:23:17.785940 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 4.30s 2026-02-04 05:23:17.785959 | orchestrator | Stop ceph mon ----------------------------------------------------------- 4.08s 2026-02-04 05:23:18.514207 | orchestrator | 2026-02-04 05:23:18 | INFO  | Task a5e3e43a-6159-4fcd-bd3b-50b9705967e3 (ceph-rolling_update) was prepared for execution. 2026-02-04 05:23:18.514301 | orchestrator | 2026-02-04 05:23:18 | INFO  | It takes a moment until task a5e3e43a-6159-4fcd-bd3b-50b9705967e3 (ceph-rolling_update) has been started and output is visible here. 
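The failed task above polls `docker exec ceph-mon-testbed-node-0 ceph --cluster ceph -m 192.168.16.10 quorum_status --format json` until the restarted monitor reappears in the quorum; here it timed out after 300s on every attempt with no stdout. A minimal sketch of the membership check that poll performs, assuming the standard `quorum_names` field of the `quorum_status` JSON (the sample payload below is illustrative only — the real call in this job produced no output):

```python
import json

def mon_in_quorum(quorum_status_json: str, mon_name: str) -> bool:
    """Return True if mon_name is listed in the cluster's quorum_names."""
    status = json.loads(quorum_status_json)
    return mon_name in status.get("quorum_names", [])

# Illustrative payload (hypothetical values, not captured from this job):
sample = json.dumps({
    "election_epoch": 10,
    "quorum": [1, 2],
    "quorum_names": ["testbed-node-1", "testbed-node-2"],
    "quorum_leader_name": "testbed-node-1",
})

# The restarted mon never rejoining is exactly this condition being False:
print(mon_in_quorum(sample, "testbed-node-0"))
```

In the playbook this check is wrapped in Ansible `retries`/`until` logic; the log shows 5 attempts, each one a fresh 300-second `quorum_status` call that failed with `[errno 110] RADOS timed out`.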
2026-02-04 05:24:40.748242 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.94s 2026-02-04 05:24:40.748387 | orchestrator | ceph-mon : Check if monitor initial keyring already exists -------------- 3.48s 2026-02-04 05:24:40.748412 | orchestrator | ceph-mon : Start the monitor service ------------------------------------ 3.45s 2026-02-04 05:24:40.748431 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.32s 2026-02-04 05:24:40.748448 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.28s 2026-02-04 05:24:40.748466 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.28s 2026-02-04 05:24:40.748483 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 3.21s 2026-02-04 05:24:40.748502 | orchestrator | ceph-infra : Add logrotate configuration -------------------------------- 3.06s 2026-02-04 05:24:40.748587 | orchestrator | ceph-validate : Include check_system.yml -------------------------------- 3.01s 2026-02-04 05:24:40.748608 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-04 05:24:40.748629 | orchestrator | 2.16.14 2026-02-04 05:24:40.748648 | orchestrator | 2026-02-04 05:24:40.748666 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-02-04 05:24:40.748684 | orchestrator | 2026-02-04 05:24:40.748703 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-02-04 05:24:40.748723 | orchestrator | Wednesday 04 February 2026 05:23:26 +0000 (0:00:01.557) 0:00:01.557 **** 2026-02-04 05:24:40.748743 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-02-04 05:24:40.748765 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-02-04 05:24:40.748789 | 
orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-02-04 05:24:40.748812 | orchestrator | skipping: [localhost] 2026-02-04 05:24:40.748834 | orchestrator | 2026-02-04 05:24:40.748856 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-02-04 05:24:40.748879 | orchestrator | 2026-02-04 05:24:40.748901 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-02-04 05:24:40.748924 | orchestrator | Wednesday 04 February 2026 05:23:27 +0000 (0:00:01.697) 0:00:03.255 **** 2026-02-04 05:24:40.748946 | orchestrator | ok: [testbed-node-0] => { 2026-02-04 05:24:40.748970 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 05:24:40.748991 | orchestrator | } 2026-02-04 05:24:40.749014 | orchestrator | ok: [testbed-node-1] => { 2026-02-04 05:24:40.749036 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 05:24:40.749058 | orchestrator | } 2026-02-04 05:24:40.749078 | orchestrator | ok: [testbed-node-2] => { 2026-02-04 05:24:40.749098 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 05:24:40.749118 | orchestrator | } 2026-02-04 05:24:40.749138 | orchestrator | ok: [testbed-node-3] => { 2026-02-04 05:24:40.749159 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 05:24:40.749179 | orchestrator | } 2026-02-04 05:24:40.749199 | orchestrator | ok: [testbed-node-4] => { 2026-02-04 05:24:40.749218 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 05:24:40.749238 | orchestrator | } 2026-02-04 05:24:40.749259 | orchestrator | ok: [testbed-node-5] => { 2026-02-04 05:24:40.749279 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 05:24:40.749299 | orchestrator | } 2026-02-04 05:24:40.749320 | orchestrator | 
ok: [testbed-manager] => { 2026-02-04 05:24:40.749340 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-04 05:24:40.749397 | orchestrator | } 2026-02-04 05:24:40.749416 | orchestrator | 2026-02-04 05:24:40.749435 | orchestrator | TASK [Gather facts] ************************************************************ 2026-02-04 05:24:40.749453 | orchestrator | Wednesday 04 February 2026 05:23:33 +0000 (0:00:05.334) 0:00:08.590 **** 2026-02-04 05:24:40.749471 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:24:40.749490 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:24:40.749508 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:24:40.749553 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:24:40.749572 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:24:40.749589 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:24:40.749606 | orchestrator | ok: [testbed-manager] 2026-02-04 05:24:40.749623 | orchestrator | 2026-02-04 05:24:40.749640 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-02-04 05:24:40.749657 | orchestrator | Wednesday 04 February 2026 05:23:39 +0000 (0:00:06.367) 0:00:14.958 **** 2026-02-04 05:24:40.749676 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-04 05:24:40.749693 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 05:24:40.749711 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 05:24:40.749728 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-04 05:24:40.749745 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-04 05:24:40.749762 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-04 05:24:40.749779 | orchestrator | ok: 
[testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 05:24:40.749796 | orchestrator | 2026-02-04 05:24:40.749813 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-02-04 05:24:40.749831 | orchestrator | Wednesday 04 February 2026 05:24:15 +0000 (0:00:36.388) 0:00:51.347 **** 2026-02-04 05:24:40.749847 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:24:40.749864 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:24:40.749882 | orchestrator | ok: [testbed-node-2] 2026-02-04 05:24:40.749898 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:24:40.749916 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:24:40.749954 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:24:40.749971 | orchestrator | ok: [testbed-manager] 2026-02-04 05:24:40.749990 | orchestrator | 2026-02-04 05:24:40.750199 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-04 05:24:40.750233 | orchestrator | Wednesday 04 February 2026 05:24:18 +0000 (0:00:02.153) 0:00:53.500 **** 2026-02-04 05:24:40.750253 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-04 05:24:40.750275 | orchestrator | 2026-02-04 05:24:40.750294 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-04 05:24:40.750314 | orchestrator | Wednesday 04 February 2026 05:24:20 +0000 (0:00:02.721) 0:00:56.222 **** 2026-02-04 05:24:40.750332 | orchestrator | ok: [testbed-node-2] 2026-02-04 05:24:40.750350 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:24:40.750361 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:24:40.750371 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:24:40.750382 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:24:40.750392 | orchestrator | ok: [testbed-node-5] 
2026-02-04 05:24:40.750403 | orchestrator | ok: [testbed-manager] 2026-02-04 05:24:40.750413 | orchestrator | 2026-02-04 05:24:40.750424 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-04 05:24:40.750435 | orchestrator | Wednesday 04 February 2026 05:24:23 +0000 (0:00:02.567) 0:00:58.789 **** 2026-02-04 05:24:40.750445 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:24:40.750456 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:24:40.750466 | orchestrator | ok: [testbed-node-2] 2026-02-04 05:24:40.750494 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:24:40.750505 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:24:40.750555 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:24:40.750568 | orchestrator | ok: [testbed-manager] 2026-02-04 05:24:40.750579 | orchestrator | 2026-02-04 05:24:40.750590 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-04 05:24:40.750601 | orchestrator | Wednesday 04 February 2026 05:24:25 +0000 (0:00:01.941) 0:01:00.730 **** 2026-02-04 05:24:40.750611 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:24:40.750622 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:24:40.750632 | orchestrator | ok: [testbed-node-2] 2026-02-04 05:24:40.750643 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:24:40.750653 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:24:40.750663 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:24:40.750674 | orchestrator | ok: [testbed-manager] 2026-02-04 05:24:40.750685 | orchestrator | 2026-02-04 05:24:40.750696 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-04 05:24:40.750706 | orchestrator | Wednesday 04 February 2026 05:24:27 +0000 (0:00:02.548) 0:01:03.279 **** 2026-02-04 05:24:40.750717 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:24:40.750727 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:24:40.750738 | 
orchestrator | ok: [testbed-node-2] 2026-02-04 05:24:40.750748 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:24:40.750759 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:24:40.750769 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:24:40.750780 | orchestrator | ok: [testbed-manager] 2026-02-04 05:24:40.750791 | orchestrator | 2026-02-04 05:24:40.750801 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-04 05:24:40.750812 | orchestrator | Wednesday 04 February 2026 05:24:29 +0000 (0:00:01.891) 0:01:05.171 **** 2026-02-04 05:24:40.750822 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:24:40.750833 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:24:40.750843 | orchestrator | ok: [testbed-node-2] 2026-02-04 05:24:40.750854 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:24:40.750864 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:24:40.750875 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:24:40.750885 | orchestrator | ok: [testbed-manager] 2026-02-04 05:24:40.750896 | orchestrator | 2026-02-04 05:24:40.750907 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-04 05:24:40.750917 | orchestrator | Wednesday 04 February 2026 05:24:32 +0000 (0:00:02.279) 0:01:07.451 **** 2026-02-04 05:24:40.750928 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:24:40.750938 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:24:40.750949 | orchestrator | ok: [testbed-node-2] 2026-02-04 05:24:40.750959 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:24:40.750970 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:24:40.750980 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:24:40.750991 | orchestrator | ok: [testbed-manager] 2026-02-04 05:24:40.751001 | orchestrator | 2026-02-04 05:24:40.751012 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-04 05:24:40.751023 | orchestrator | 
Wednesday 04 February 2026 05:24:33 +0000 (0:00:01.974) 0:01:09.425 **** 2026-02-04 05:24:40.751034 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:24:40.751045 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:24:40.751056 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:24:40.751066 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:24:40.751077 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:24:40.751088 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:24:40.751098 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:24:40.751109 | orchestrator | 2026-02-04 05:24:40.751119 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-04 05:24:40.751130 | orchestrator | Wednesday 04 February 2026 05:24:36 +0000 (0:00:02.307) 0:01:11.732 **** 2026-02-04 05:24:40.751141 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:24:40.751151 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:24:40.751162 | orchestrator | ok: [testbed-node-2] 2026-02-04 05:24:40.751180 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:24:40.751191 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:24:40.751201 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:24:40.751212 | orchestrator | ok: [testbed-manager] 2026-02-04 05:24:40.751223 | orchestrator | 2026-02-04 05:24:40.751233 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-04 05:24:40.751244 | orchestrator | Wednesday 04 February 2026 05:24:38 +0000 (0:00:02.150) 0:01:13.883 **** 2026-02-04 05:24:40.751255 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 05:24:40.751266 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 05:24:40.751276 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 05:24:40.751287 | orchestrator | 2026-02-04 
05:24:40.751297 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-04 05:24:40.751318 | orchestrator | Wednesday 04 February 2026 05:24:40 +0000 (0:00:01.706) 0:01:15.589 **** 2026-02-04 05:24:40.751329 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:24:40.751340 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:24:40.751362 | orchestrator | ok: [testbed-node-2] 2026-02-04 05:25:06.345260 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:25:06.345345 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:25:06.345353 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:25:06.345361 | orchestrator | ok: [testbed-manager] 2026-02-04 05:25:06.345367 | orchestrator | 2026-02-04 05:25:06.345375 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-04 05:25:06.345383 | orchestrator | Wednesday 04 February 2026 05:24:42 +0000 (0:00:02.221) 0:01:17.811 **** 2026-02-04 05:25:06.345390 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 05:25:06.345397 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-04 05:25:06.345406 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-04 05:25:06.345410 | orchestrator | 2026-02-04 05:25:06.345414 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-04 05:25:06.345418 | orchestrator | Wednesday 04 February 2026 05:24:45 +0000 (0:00:03.354) 0:01:21.166 **** 2026-02-04 05:25:06.345423 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 05:25:06.345429 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 05:25:06.345435 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 05:25:06.345442 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:06.345448 | orchestrator 
| 2026-02-04 05:25:06.345454 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-04 05:25:06.345458 | orchestrator | Wednesday 04 February 2026 05:24:47 +0000 (0:00:01.430) 0:01:22.596 **** 2026-02-04 05:25:06.345464 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-04 05:25:06.345470 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-04 05:25:06.345475 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-04 05:25:06.345479 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:06.345483 | orchestrator | 2026-02-04 05:25:06.345486 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-04 05:25:06.345490 | orchestrator | Wednesday 04 February 2026 05:24:49 +0000 (0:00:01.889) 0:01:24.486 **** 2026-02-04 05:25:06.345512 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:06.345518 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:06.345522 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:06.345526 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:06.345530 | orchestrator | 2026-02-04 05:25:06.345638 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-04 05:25:06.345651 | orchestrator | Wednesday 04 February 2026 05:24:50 +0000 (0:00:01.166) 0:01:25.652 **** 2026-02-04 05:25:06.345678 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f8b4daebdb0f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-04 05:24:43.013158', 'end': '2026-02-04 05:24:43.078448', 'delta': '0:00:00.065290', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f8b4daebdb0f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-04 05:25:06.345685 | orchestrator | ok: 
[testbed-node-0] => (item={'changed': False, 'stdout': 'e8207b686900', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-04 05:24:43.881884', 'end': '2026-02-04 05:24:43.928678', 'delta': '0:00:00.046794', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e8207b686900'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-04 05:25:06.345689 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c48be97cec44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-04 05:24:44.426099', 'end': '2026-02-04 05:24:44.483736', 'delta': '0:00:00.057637', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c48be97cec44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-04 05:25:06.345693 | orchestrator | 2026-02-04 05:25:06.345697 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-04 05:25:06.345706 | orchestrator | Wednesday 04 February 2026 05:24:51 +0000 (0:00:01.238) 0:01:26.891 **** 2026-02-04 05:25:06.345710 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:25:06.345714 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:25:06.345717 | orchestrator | ok: 
[testbed-node-2] 2026-02-04 05:25:06.345721 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:25:06.345725 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:25:06.345728 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:25:06.345732 | orchestrator | ok: [testbed-manager] 2026-02-04 05:25:06.345736 | orchestrator | 2026-02-04 05:25:06.345740 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-04 05:25:06.345743 | orchestrator | Wednesday 04 February 2026 05:24:53 +0000 (0:00:02.183) 0:01:29.074 **** 2026-02-04 05:25:06.345747 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:06.345751 | orchestrator | 2026-02-04 05:25:06.345755 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-04 05:25:06.345758 | orchestrator | Wednesday 04 February 2026 05:24:54 +0000 (0:00:01.247) 0:01:30.321 **** 2026-02-04 05:25:06.345762 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:25:06.345766 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:25:06.345770 | orchestrator | ok: [testbed-node-2] 2026-02-04 05:25:06.345773 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:25:06.345777 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:25:06.345781 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:25:06.345785 | orchestrator | ok: [testbed-manager] 2026-02-04 05:25:06.345788 | orchestrator | 2026-02-04 05:25:06.345792 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-04 05:25:06.345796 | orchestrator | Wednesday 04 February 2026 05:24:57 +0000 (0:00:02.236) 0:01:32.558 **** 2026-02-04 05:25:06.345799 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:25:06.345803 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-04 05:25:06.345807 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-04 05:25:06.345811 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-02-04 05:25:06.345814 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-04 05:25:06.345818 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-04 05:25:06.345822 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-04 05:25:06.345826 | orchestrator | 2026-02-04 05:25:06.345829 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-04 05:25:06.345833 | orchestrator | Wednesday 04 February 2026 05:25:01 +0000 (0:00:04.325) 0:01:36.883 **** 2026-02-04 05:25:06.345837 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:25:06.345841 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:25:06.345846 | orchestrator | ok: [testbed-node-2] 2026-02-04 05:25:06.345850 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:25:06.345855 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:25:06.345859 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:25:06.345864 | orchestrator | ok: [testbed-manager] 2026-02-04 05:25:06.345868 | orchestrator | 2026-02-04 05:25:06.345873 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-04 05:25:06.345877 | orchestrator | Wednesday 04 February 2026 05:25:03 +0000 (0:00:02.391) 0:01:39.275 **** 2026-02-04 05:25:06.345882 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:06.345897 | orchestrator | 2026-02-04 05:25:06.345902 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-04 05:25:06.345909 | orchestrator | Wednesday 04 February 2026 05:25:05 +0000 (0:00:01.165) 0:01:40.441 **** 2026-02-04 05:25:06.345913 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:06.345918 | orchestrator | 2026-02-04 05:25:06.345925 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-04 05:25:22.093983 | orchestrator | 
Wednesday 04 February 2026 05:25:06 +0000 (0:00:01.319) 0:01:41.760 **** 2026-02-04 05:25:22.094173 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:22.094218 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:25:22.094230 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:25:22.094240 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:25:22.094250 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:25:22.094259 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:25:22.094269 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:25:22.094279 | orchestrator | 2026-02-04 05:25:22.094290 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-04 05:25:22.094300 | orchestrator | Wednesday 04 February 2026 05:25:08 +0000 (0:00:02.476) 0:01:44.237 **** 2026-02-04 05:25:22.094310 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:22.094320 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:25:22.094330 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:25:22.094339 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:25:22.094349 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:25:22.094359 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:25:22.094369 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:25:22.094378 | orchestrator | 2026-02-04 05:25:22.094393 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-04 05:25:22.094409 | orchestrator | Wednesday 04 February 2026 05:25:11 +0000 (0:00:02.196) 0:01:46.433 **** 2026-02-04 05:25:22.094425 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:22.094441 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:25:22.094458 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:25:22.094474 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:25:22.094491 | orchestrator | skipping: [testbed-node-4] 
2026-02-04 05:25:22.094508 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:25:22.094524 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:25:22.094536 | orchestrator | 2026-02-04 05:25:22.094575 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-04 05:25:22.094588 | orchestrator | Wednesday 04 February 2026 05:25:13 +0000 (0:00:02.177) 0:01:48.611 **** 2026-02-04 05:25:22.094600 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:22.094612 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:25:22.094623 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:25:22.094634 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:25:22.094646 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:25:22.094658 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:25:22.094669 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:25:22.094679 | orchestrator | 2026-02-04 05:25:22.094688 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-04 05:25:22.094698 | orchestrator | Wednesday 04 February 2026 05:25:15 +0000 (0:00:02.329) 0:01:50.941 **** 2026-02-04 05:25:22.094707 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:22.094717 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:25:22.094726 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:25:22.094735 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:25:22.094745 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:25:22.094757 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:25:22.094774 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:25:22.094790 | orchestrator | 2026-02-04 05:25:22.094806 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-04 05:25:22.094822 | orchestrator | Wednesday 04 February 2026 05:25:17 +0000 (0:00:02.220) 0:01:53.162 **** 
2026-02-04 05:25:22.094839 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:22.094856 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:25:22.094871 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:25:22.094886 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:25:22.094896 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:25:22.094905 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:25:22.094914 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:25:22.094924 | orchestrator | 2026-02-04 05:25:22.094933 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-04 05:25:22.094955 | orchestrator | Wednesday 04 February 2026 05:25:19 +0000 (0:00:01.987) 0:01:55.150 **** 2026-02-04 05:25:22.094965 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:22.094974 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:25:22.094983 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:25:22.094993 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:25:22.095002 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:25:22.095011 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:25:22.095021 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:25:22.095030 | orchestrator | 2026-02-04 05:25:22.095040 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-04 05:25:22.095049 | orchestrator | Wednesday 04 February 2026 05:25:21 +0000 (0:00:02.198) 0:01:57.349 **** 2026-02-04 05:25:22.095061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})  2026-02-04 05:25:22.095074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.095120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.095140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-04 05:25:22.095160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-02-04 05:25:22.095178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.095195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.095248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5c0a15c2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14'], 
'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-04 05:25:22.462376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.462474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.462489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.462499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.462530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': 
{'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.462540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-04 05:25:22.462576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.462584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.462596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.462620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '50d185a4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part16', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part14', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part15', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part1', 
'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-04 05:25:22.462633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.462639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.462646 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:22.462653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.462659 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.462672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.633026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-04 05:25:22.633131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.633149 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.633184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.633217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '853c0bfc', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part16', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part14', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': 
'8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part15', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part1', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-04 05:25:22.633250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.633264 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:25:22.633276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-02-04 05:25:22.633288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.633313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e', 'dm-uuid-LVM-BggcAryejjvGBF4uvp6BcYG8cW5k2lInqXUvcrL0euXIKDnaXO5lD17ef9ulmfzT'], 'uuids': ['f158fdb8-bb9c-48fc-8ca9-031d13c41132'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '859f82ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['qXUvcr-L0eu-XIKD-naXO-5lD1-7ef9-ulmfzT']}})  2026-02-04 05:25:22.633326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811', 'scsi-SQEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10db325f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-04 05:25:22.633338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-LUqg5q-XQXl-4J84-Fu4r-xNUp-Z07d-jQvh8Z', 'scsi-0QEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388', 'scsi-SQEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9e979b3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f']}})  2026-02-04 05:25:22.633357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.633377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.788051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-19-59-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-04 05:25:22.788132 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:25:22.788143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.788172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8YdRgn-42SX-PKkS-Smdq-nloX-2coy-a2uTEh', 'dm-uuid-CRYPT-LUKS2-2302d1af8aee4d9d86e1dfe7dfc67d39-8YdRgn-42SX-PKkS-Smdq-nloX-2coy-a2uTEh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-04 05:25:22.788180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 
05:25:22.788188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f', 'dm-uuid-LVM-8XaWcwBldrFACyhn8O8pDrkh8WYfwfMh8YdRgn42SXPKkSSmdqnloX2coya2uTEh'], 'uuids': ['2302d1af-8aee-4d9d-86e1-dfe7dfc67d39'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9e979b3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8YdRgn-42SX-PKkS-Smdq-nloX-2coy-a2uTEh']}})  2026-02-04 05:25:22.788196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PkP1x1-WFQe-TRGf-2R1c-oEQv-Qw43-IKwaXF', 'scsi-0QEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40', 'scsi-SQEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '859f82ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e']}})  2026-02-04 05:25:22.788216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.788240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e5ab81eb', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-04 05:25:22.788254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.788261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.788268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.788278 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-qXUvcr-L0eu-XIKD-naXO-5lD1-7ef9-ulmfzT', 'dm-uuid-CRYPT-LUKS2-f158fdb8bb9c48fc8ca9031d13c41132-qXUvcr-L0eu-XIKD-naXO-5lD1-7ef9-ulmfzT'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-04 05:25:22.788291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843', 'dm-uuid-LVM-GuQppvMqMgPM92HHdmch1RUlEtgMK7bAQGkZWEBmxgWBBqnmby4j6kn1XrU8W6rj'], 'uuids': ['18125888-7064-431c-840e-0a8e7e279804'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '87322fe2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QGkZWE-Bmxg-WBBq-nmby-4j6k-n1Xr-U8W6rj']}})  2026-02-04 05:25:22.995770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23', 'scsi-SQEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0d2f838', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-04 05:25:22.995870 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lVamx9-eYv9-88F9-1eWN-Mo2X-ZvoC-DQM8Qk', 'scsi-0QEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536', 'scsi-SQEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d2cd144', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c']}})  2026-02-04 05:25:22.995887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.995902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.995915 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-04 05:25:22.995944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.995956 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5ctI13-zu0z-0FRb-KQOR-FQtA-0W3p-u2nuf0', 'dm-uuid-CRYPT-LUKS2-81c8120205304967adb7cc6e42b3aaa8-5ctI13-zu0z-0FRb-KQOR-FQtA-0W3p-u2nuf0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-04 05:25:22.995987 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:25:22.996018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.996032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c', 'dm-uuid-LVM-jabOFLmF8RS1U4YRftNuTtdThdIFxea35ctI13zu0z0FRbKQORFQtA0W3pu2nuf0'], 'uuids': ['81c81202-0530-4967-adb7-cc6e42b3aaa8'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '6d2cd144', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5ctI13-zu0z-0FRb-KQOR-FQtA-0W3p-u2nuf0']}})  2026-02-04 05:25:22.996045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1Bwhrb-Xrjl-JUvU-1GoK-f7aN-SV93-uYzfRx', 'scsi-0QEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd', 'scsi-SQEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '87322fe2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843']}})  2026-02-04 05:25:22.996056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:22.996085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd5a1c69a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-04 05:25:23.120730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:23.120833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:23.120849 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QGkZWE-Bmxg-WBBq-nmby-4j6k-n1Xr-U8W6rj', 'dm-uuid-CRYPT-LUKS2-181258887064431c840e0a8e7e279804-QGkZWE-Bmxg-WBBq-nmby-4j6k-n1Xr-U8W6rj'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-04 05:25:23.120864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:23.120876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639', 'dm-uuid-LVM-vz2cv2RninoOpnjrAP98IcdUAgz3XBEESK6kemILvNkP1xNIipyazKS9tR60DcmG'], 'uuids': ['b92d0132-23f4-42dc-a584-a78bf3becacb'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3eb80431', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['SK6kem-ILvN-kP1x-NIip-yazK-S9tR-60DcmG']}})  2026-02-04 05:25:23.120907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b', 'scsi-SQEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5de00e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-04 05:25:23.120941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Zb3vde-Jb13-PnWs-XBLv-pqCq-xraX-sEUQHY', 'scsi-0QEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675', 'scsi-SQEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aa7bd7a5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af']}})  2026-02-04 05:25:23.120973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:23.120986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:23.120998 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-04 05:25:23.121009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:23.121021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VvgnpR-veGH-UC1V-Wvw0-UeAn-dBY1-g45KfH', 'dm-uuid-CRYPT-LUKS2-4b38dba5f6644e8da6669b50aa3859a3-VvgnpR-veGH-UC1V-Wvw0-UeAn-dBY1-g45KfH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-04 05:25:23.121032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:23.121049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af', 'dm-uuid-LVM-jfhjIQs9I12AbVZ4uHpbas8Q8DuoJ56eVvgnpRveGHUC1VWvw0UeAndBY1g45KfH'], 'uuids': ['4b38dba5-f664-4e8d-a666-9b50aa3859a3'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'aa7bd7a5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VvgnpR-veGH-UC1V-Wvw0-UeAn-dBY1-g45KfH']}})  2026-02-04 05:25:23.121076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2LO7pB-3JRT-gNDG-CXHX-CXgP-r5lI-kGILdq', 'scsi-0QEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52', 'scsi-SQEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3eb80431', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639']}})  2026-02-04 05:25:24.388648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:24.388760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cdb44653', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-04 05:25:24.388799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:24.388833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:24.388846 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-SK6kem-ILvN-kP1x-NIip-yazK-S9tR-60DcmG', 'dm-uuid-CRYPT-LUKS2-b92d013223f442dca584a78bf3becacb-SK6kem-ILvN-kP1x-NIip-yazK-S9tR-60DcmG'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-04 05:25:24.388860 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:25:24.388873 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:25:24.388903 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:24.388915 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:24.388927 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:24.388939 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': 
['2026-02-04-01-20-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-04 05:25:24.388951 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:24.388962 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:24.388986 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:24.389008 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0e69a1b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part16', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part14', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part15', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part1', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-04 05:25:24.568256 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:24.568335 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:25:24.568346 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:25:24.568355 | orchestrator | 2026-02-04 05:25:24.568362 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-04 05:25:24.568388 | orchestrator | Wednesday 04 February 2026 05:25:24 +0000 (0:00:02.446) 0:01:59.795 **** 2026-02-04 05:25:24.568408 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.568418 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.568424 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.568432 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.568452 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.568459 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.568470 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.568483 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5c0a15c2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.568497 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.781941 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.782122 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:25:24.782160 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.782174 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.782186 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.782198 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-04-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.782211 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.782243 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.782263 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.782284 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '50d185a4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part16', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part14', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part15', 
'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part1', 'scsi-SQEMU_QEMU_HARDDISK_50d185a4-af79-48b0-8c50-ba5ba990d99d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.782298 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:24.782318 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.198393 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:25:25.198500 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.198538 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.198611 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.198626 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.198639 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.198651 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.198710 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.198734 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '853c0bfc', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part16', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part14', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part15', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part1', 'scsi-SQEMU_QEMU_HARDDISK_853c0bfc-16cc-413e-b766-6ae1ea37d859-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.198749 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.198769 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.198782 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:25:25.198802 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.349080 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e', 'dm-uuid-LVM-BggcAryejjvGBF4uvp6BcYG8cW5k2lInqXUvcrL0euXIKDnaXO5lD17ef9ulmfzT'], 'uuids': ['f158fdb8-bb9c-48fc-8ca9-031d13c41132'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '859f82ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['qXUvcr-L0eu-XIKD-naXO-5lD1-7ef9-ulmfzT']}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.349157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811', 'scsi-SQEMU_QEMU_HARDDISK_10db325f-6922-4f85-a906-c9ac62af1811'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '10db325f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.349167 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-LUqg5q-XQXl-4J84-Fu4r-xNUp-Z07d-jQvh8Z', 'scsi-0QEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388', 'scsi-SQEMU_QEMU_HARDDISK_9e979b3a-dcfc-4e73-af9b-91d41771b388'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '9e979b3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f']}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.349193 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.349201 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.349225 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-19-59-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.349234 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.349240 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843', 'dm-uuid-LVM-GuQppvMqMgPM92HHdmch1RUlEtgMK7bAQGkZWEBmxgWBBqnmby4j6kn1XrU8W6rj'], 'uuids': ['18125888-7064-431c-840e-0a8e7e279804'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '87322fe2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['QGkZWE-Bmxg-WBBq-nmby-4j6k-n1Xr-U8W6rj']}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.349247 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23', 'scsi-SQEMU_QEMU_HARDDISK_e0d2f838-a19f-44eb-bcbc-1b531e772c23'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0d2f838', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.349259 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-lVamx9-eYv9-88F9-1eWN-Mo2X-ZvoC-DQM8Qk', 'scsi-0QEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536', 'scsi-SQEMU_QEMU_HARDDISK_6d2cd144-5f23-453e-8510-b2ac8c490536'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6d2cd144', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c']}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.349275 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.502511 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.502655 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.502670 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8YdRgn-42SX-PKkS-Smdq-nloX-2coy-a2uTEh', 'dm-uuid-CRYPT-LUKS2-2302d1af8aee4d9d86e1dfe7dfc67d39-8YdRgn-42SX-PKkS-Smdq-nloX-2coy-a2uTEh'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.502701 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-06-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.502710 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.502748 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-5ctI13-zu0z-0FRb-KQOR-FQtA-0W3p-u2nuf0', 'dm-uuid-CRYPT-LUKS2-81c8120205304967adb7cc6e42b3aaa8-5ctI13-zu0z-0FRb-KQOR-FQtA-0W3p-u2nuf0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.502759 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.502768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.502779 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--f48ca6a8--b497--5c65--8a3b--569ec358ef4c-osd--block--f48ca6a8--b497--5c65--8a3b--569ec358ef4c', 'dm-uuid-LVM-jabOFLmF8RS1U4YRftNuTtdThdIFxea35ctI13zu0z0FRbKQORFQtA0W3pu2nuf0'], 'uuids': ['81c81202-0530-4967-adb7-cc6e42b3aaa8'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 
'6d2cd144', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['5ctI13-zu0z-0FRb-KQOR-FQtA-0W3p-u2nuf0']}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.502798 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1Bwhrb-Xrjl-JUvU-1GoK-f7aN-SV93-uYzfRx', 'scsi-0QEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd', 'scsi-SQEMU_QEMU_HARDDISK_87322fe2-f6c0-4479-8323-00ed6f38f0dd'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '87322fe2', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8a64378d--205e--5817--b815--b641dc764843-osd--block--8a64378d--205e--5817--b815--b641dc764843']}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.502810 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.502830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--33635451--34dd--546b--bd98--6f515d7d790f-osd--block--33635451--34dd--546b--bd98--6f515d7d790f', 'dm-uuid-LVM-8XaWcwBldrFACyhn8O8pDrkh8WYfwfMh8YdRgn42SXPKkSSmdqnloX2coya2uTEh'], 'uuids': ['2302d1af-8aee-4d9d-86e1-dfe7dfc67d39'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '9e979b3a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8YdRgn-42SX-PKkS-Smdq-nloX-2coy-a2uTEh']}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.590379 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 
'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd5a1c69a', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1', 'scsi-SQEMU_QEMU_HARDDISK_d5a1c69a-e203-43f1-92a7-d53a24ddc92f-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.590529 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.590595 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-PkP1x1-WFQe-TRGf-2R1c-oEQv-Qw43-IKwaXF', 'scsi-0QEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40', 'scsi-SQEMU_QEMU_HARDDISK_859f82ae-faba-4c56-a83f-b08f511c4f40'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '859f82ae', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--f6bda8a0--a04e--51a6--8ac1--652b1721251e-osd--block--f6bda8a0--a04e--51a6--8ac1--652b1721251e']}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.590628 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.590641 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.590663 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639', 'dm-uuid-LVM-vz2cv2RninoOpnjrAP98IcdUAgz3XBEESK6kemILvNkP1xNIipyazKS9tR60DcmG'], 'uuids': ['b92d0132-23f4-42dc-a584-a78bf3becacb'], 'labels': [], 
'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3eb80431', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['SK6kem-ILvN-kP1x-NIip-yazK-S9tR-60DcmG']}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.590675 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-QGkZWE-Bmxg-WBBq-nmby-4j6k-n1Xr-U8W6rj', 'dm-uuid-CRYPT-LUKS2-181258887064431c840e0a8e7e279804-QGkZWE-Bmxg-WBBq-nmby-4j6k-n1Xr-U8W6rj'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.590699 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.590712 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:25:25.590733 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b', 'scsi-SQEMU_QEMU_HARDDISK_b5de00e7-ee07-4e3d-81c3-372cd77c193b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b5de00e7', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.678747 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.678886 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e5ab81eb', 
'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5ab81eb-29ae-4e69-b67a-37e5644be861-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.678934 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.678979 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-Zb3vde-Jb13-PnWs-XBLv-pqCq-xraX-sEUQHY', 'scsi-0QEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675', 'scsi-SQEMU_QEMU_HARDDISK_aa7bd7a5-43b2-4e34-8a80-8d27fcf27675'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'aa7bd7a5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af']}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.678993 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.679013 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.679025 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.679041 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.679059 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-30-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.679078 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.761262 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-qXUvcr-L0eu-XIKD-naXO-5lD1-7ef9-ulmfzT', 'dm-uuid-CRYPT-LUKS2-f158fdb8bb9c48fc8ca9031d13c41132-qXUvcr-L0eu-XIKD-naXO-5lD1-7ef9-ulmfzT'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.761379 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.761394 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': 
['2026-02-04-01-20-00-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.761408 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:25:25.761422 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.761480 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.761494 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was 
False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.761530 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e0e69a1b', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part16', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part14', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part15', 
'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part1', 'scsi-SQEMU_QEMU_HARDDISK_e0e69a1b-49e5-4bfa-8d89-c757420b8cc5-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.761642 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-VvgnpR-veGH-UC1V-Wvw0-UeAn-dBY1-g45KfH', 'dm-uuid-CRYPT-LUKS2-4b38dba5f6644e8da6669b50aa3859a3-VvgnpR-veGH-UC1V-Wvw0-UeAn-dBY1-g45KfH'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.761658 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:25.761680 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:30.266836 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:30.266970 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-ceph--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af-osd--block--7ab9afb0--5bc3--5f2a--af50--46dbad87a4af', 'dm-uuid-LVM-jfhjIQs9I12AbVZ4uHpbas8Q8DuoJ56eVvgnpRveGHUC1VWvw0UeAndBY1g45KfH'], 'uuids': ['4b38dba5-f664-4e8d-a666-9b50aa3859a3'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'aa7bd7a5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['VvgnpR-veGH-UC1V-Wvw0-UeAn-dBY1-g45KfH']}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:30.266991 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-2LO7pB-3JRT-gNDG-CXHX-CXgP-r5lI-kGILdq', 'scsi-0QEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52', 'scsi-SQEMU_QEMU_HARDDISK_3eb80431-163e-49f3-a2bf-dfaced367a52'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3eb80431', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--43734a2f--bb9f--5443--b704--3f4971f68639-osd--block--43734a2f--bb9f--5443--b704--3f4971f68639']}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:30.267009 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:25:30.267043 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:30.267086 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'cdb44653', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1', 'scsi-SQEMU_QEMU_HARDDISK_cdb44653-92bc-471c-ab02-c768f71f0118-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:30.267123 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:30.267137 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:30.267156 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-SK6kem-ILvN-kP1x-NIip-yazK-S9tR-60DcmG', 'dm-uuid-CRYPT-LUKS2-b92d013223f442dca584a78bf3becacb-SK6kem-ILvN-kP1x-NIip-yazK-S9tR-60DcmG'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:25:30.267187 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:25:30.267230 | orchestrator | 2026-02-04 05:25:30.267244 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-04 05:25:30.267266 | orchestrator | Wednesday 04 February 2026 05:25:26 +0000 (0:00:02.544) 0:02:02.340 **** 2026-02-04 05:25:30.267277 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:25:30.267289 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:25:30.267299 | orchestrator | ok: [testbed-node-2] 2026-02-04 05:25:30.267310 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:25:30.267320 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:25:30.267331 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:25:30.267342 | orchestrator | ok: [testbed-manager] 2026-02-04 05:25:30.267354 | orchestrator | 2026-02-04 05:25:30.267368 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-04 05:25:30.267381 | orchestrator | Wednesday 04 February 2026 05:25:29 +0000 (0:00:02.613) 0:02:04.954 **** 2026-02-04 05:25:30.267393 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:25:30.267405 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:25:30.267417 | orchestrator | ok: [testbed-node-2] 2026-02-04 05:25:30.267429 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:25:30.267451 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:26:02.619279 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:26:02.619387 | orchestrator | ok: [testbed-manager] 2026-02-04 05:26:02.619403 | orchestrator | 2026-02-04 05:26:02.619415 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-04 05:26:02.619426 | orchestrator | Wednesday 04 February 2026 05:25:31 +0000 (0:00:01.957) 0:02:06.911 **** 2026-02-04 
05:26:02.619436 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:26:02.619447 | orchestrator | ok: [testbed-node-1] 2026-02-04 05:26:02.619456 | orchestrator | ok: [testbed-node-2] 2026-02-04 05:26:02.619466 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:26:02.619476 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:26:02.619487 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:26:02.619497 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:26:02.619507 | orchestrator | 2026-02-04 05:26:02.619517 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-04 05:26:02.619527 | orchestrator | Wednesday 04 February 2026 05:25:33 +0000 (0:00:02.511) 0:02:09.422 **** 2026-02-04 05:26:02.619537 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:26:02.619547 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:26:02.619557 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:26:02.619566 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:26:02.619576 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:26:02.619646 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:26:02.619657 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:26:02.619666 | orchestrator | 2026-02-04 05:26:02.619676 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-04 05:26:02.619687 | orchestrator | Wednesday 04 February 2026 05:25:35 +0000 (0:00:01.924) 0:02:11.347 **** 2026-02-04 05:26:02.619738 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:26:02.619748 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:26:02.619758 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:26:02.619768 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:26:02.619778 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:26:02.619800 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:26:02.619819 | orchestrator | ok: 
[testbed-manager -> testbed-node-2(192.168.16.12)] 2026-02-04 05:26:02.619831 | orchestrator | 2026-02-04 05:26:02.619843 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-04 05:26:02.619855 | orchestrator | Wednesday 04 February 2026 05:25:38 +0000 (0:00:02.736) 0:02:14.083 **** 2026-02-04 05:26:02.619866 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:26:02.619878 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:26:02.619888 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:26:02.619900 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:26:02.619911 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:26:02.619923 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:26:02.619934 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:26:02.619969 | orchestrator | 2026-02-04 05:26:02.619982 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-04 05:26:02.619993 | orchestrator | Wednesday 04 February 2026 05:25:40 +0000 (0:00:01.985) 0:02:16.069 **** 2026-02-04 05:26:02.620007 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 05:26:02.620019 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-04 05:26:02.620030 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-04 05:26:02.620042 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-04 05:26:02.620054 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-04 05:26:02.620065 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-04 05:26:02.620077 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-04 05:26:02.620088 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-04 05:26:02.620100 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-04 05:26:02.620111 | orchestrator | ok: [testbed-node-2] => 
(item=testbed-node-2) 2026-02-04 05:26:02.620123 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-04 05:26:02.620134 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-04 05:26:02.620145 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-04 05:26:02.620156 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-04 05:26:02.620168 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-04 05:26:02.620192 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-04 05:26:02.620202 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-04 05:26:02.620211 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-04 05:26:02.620221 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-04 05:26:02.620231 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-04 05:26:02.620241 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-04 05:26:02.620250 | orchestrator | 2026-02-04 05:26:02.620260 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-04 05:26:02.620270 | orchestrator | Wednesday 04 February 2026 05:25:43 +0000 (0:00:03.246) 0:02:19.315 **** 2026-02-04 05:26:02.620279 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 05:26:02.620289 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 05:26:02.620299 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 05:26:02.620308 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:26:02.620318 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-04 05:26:02.620328 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-04 05:26:02.620337 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-04 05:26:02.620347 | orchestrator | 
skipping: [testbed-node-1] 2026-02-04 05:26:02.620356 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-04 05:26:02.620366 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-04 05:26:02.620375 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-04 05:26:02.620385 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:26:02.620395 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-04 05:26:02.620421 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-04 05:26:02.620432 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-04 05:26:02.620441 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:26:02.620451 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-04 05:26:02.620461 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-04 05:26:02.620470 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-04 05:26:02.620480 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:26:02.620490 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-04 05:26:02.620506 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-04 05:26:02.620516 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-04 05:26:02.620525 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:26:02.620535 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-04 05:26:02.620545 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-04 05:26:02.620555 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-04 05:26:02.620564 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:26:02.620574 | orchestrator | 2026-02-04 05:26:02.620602 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] 
*********************** 2026-02-04 05:26:02.620612 | orchestrator | Wednesday 04 February 2026 05:25:46 +0000 (0:00:02.441) 0:02:21.757 **** 2026-02-04 05:26:02.620622 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:26:02.620632 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:26:02.620641 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:26:02.620651 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:26:02.620662 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 05:26:02.620672 | orchestrator | 2026-02-04 05:26:02.620682 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-04 05:26:02.620693 | orchestrator | Wednesday 04 February 2026 05:25:48 +0000 (0:00:02.264) 0:02:24.022 **** 2026-02-04 05:26:02.620703 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:26:02.620712 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:26:02.620722 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:26:02.620732 | orchestrator | 2026-02-04 05:26:02.620741 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-04 05:26:02.620751 | orchestrator | Wednesday 04 February 2026 05:25:50 +0000 (0:00:01.684) 0:02:25.707 **** 2026-02-04 05:26:02.620761 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:26:02.620771 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:26:02.620781 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:26:02.620790 | orchestrator | 2026-02-04 05:26:02.620800 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-04 05:26:02.620810 | orchestrator | Wednesday 04 February 2026 05:25:51 +0000 (0:00:01.398) 0:02:27.105 **** 2026-02-04 05:26:02.620820 | orchestrator | skipping: [testbed-node-3] 
2026-02-04 05:26:02.620830 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:26:02.620839 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:26:02.620849 | orchestrator | 2026-02-04 05:26:02.620859 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-04 05:26:02.620868 | orchestrator | Wednesday 04 February 2026 05:25:53 +0000 (0:00:01.375) 0:02:28.480 **** 2026-02-04 05:26:02.620878 | orchestrator | ok: [testbed-node-3] 2026-02-04 05:26:02.620888 | orchestrator | ok: [testbed-node-4] 2026-02-04 05:26:02.620898 | orchestrator | ok: [testbed-node-5] 2026-02-04 05:26:02.620907 | orchestrator | 2026-02-04 05:26:02.620917 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-04 05:26:02.620927 | orchestrator | Wednesday 04 February 2026 05:25:54 +0000 (0:00:01.459) 0:02:29.939 **** 2026-02-04 05:26:02.620937 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 05:26:02.620947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 05:26:02.620956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 05:26:02.620966 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:26:02.620975 | orchestrator | 2026-02-04 05:26:02.620990 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-04 05:26:02.621000 | orchestrator | Wednesday 04 February 2026 05:25:56 +0000 (0:00:01.736) 0:02:31.676 **** 2026-02-04 05:26:02.621010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-04 05:26:02.621026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-04 05:26:02.621035 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-04 05:26:02.621045 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:26:02.621055 | orchestrator | 2026-02-04 05:26:02.621065 | 
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-04 05:26:02.621074 | orchestrator | Wednesday 04 February 2026 05:25:57 +0000 (0:00:01.683) 0:02:33.359 ****
2026-02-04 05:26:02.621084 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-04 05:26:02.621094 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-04 05:26:02.621103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-04 05:26:02.621113 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:26:02.621123 | orchestrator |
2026-02-04 05:26:02.621133 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-04 05:26:02.621143 | orchestrator | Wednesday 04 February 2026 05:25:59 +0000 (0:00:01.694) 0:02:35.054 ****
2026-02-04 05:26:02.621152 | orchestrator | ok: [testbed-node-3]
2026-02-04 05:26:02.621162 | orchestrator | ok: [testbed-node-4]
2026-02-04 05:26:02.621172 | orchestrator | ok: [testbed-node-5]
2026-02-04 05:26:02.621181 | orchestrator |
2026-02-04 05:26:02.621191 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-04 05:26:02.621201 | orchestrator | Wednesday 04 February 2026 05:26:01 +0000 (0:00:01.393) 0:02:36.447 ****
2026-02-04 05:26:02.621211 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-04 05:26:02.621226 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-04 05:26:50.995171 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-04 05:26:50.995256 | orchestrator |
2026-02-04 05:26:50.995264 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-04 05:26:50.995269 | orchestrator | Wednesday 04 February 2026 05:26:02 +0000 (0:00:01.589) 0:02:38.037 ****
2026-02-04 05:26:50.995274 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 05:26:50.995279 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 05:26:50.995284 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 05:26:50.995288 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-04 05:26:50.995292 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-04 05:26:50.995296 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-04 05:26:50.995300 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-04 05:26:50.995304 | orchestrator |
2026-02-04 05:26:50.995308 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-04 05:26:50.995311 | orchestrator | Wednesday 04 February 2026 05:26:04 +0000 (0:00:02.087) 0:02:40.125 ****
2026-02-04 05:26:50.995315 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 05:26:50.995319 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 05:26:50.995323 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 05:26:50.995327 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-04 05:26:50.995331 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-04 05:26:50.995335 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-04 05:26:50.995338 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-04 05:26:50.995342 | orchestrator |
2026-02-04 05:26:50.995346 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-02-04 05:26:50.995350 | orchestrator | Wednesday 04 February 2026 05:26:07 +0000 (0:00:03.003) 0:02:43.128 ****
2026-02-04 05:26:50.995369 | orchestrator | changed: [testbed-node-3]
2026-02-04 05:26:50.995373 | orchestrator | changed: [testbed-node-4]
2026-02-04 05:26:50.995376 | orchestrator | changed: [testbed-node-5]
2026-02-04 05:26:50.995380 | orchestrator | changed: [testbed-manager]
2026-02-04 05:26:50.995384 | orchestrator | changed: [testbed-node-2]
2026-02-04 05:26:50.995388 | orchestrator | changed: [testbed-node-1]
2026-02-04 05:26:50.995391 | orchestrator | changed: [testbed-node-0]
2026-02-04 05:26:50.995395 | orchestrator |
2026-02-04 05:26:50.995399 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-02-04 05:26:50.995403 | orchestrator | Wednesday 04 February 2026 05:26:18 +0000 (0:00:11.258) 0:02:54.386 ****
2026-02-04 05:26:50.995406 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:26:50.995410 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:26:50.995414 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:26:50.995418 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:26:50.995421 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:26:50.995425 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:26:50.995429 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:26:50.995433 | orchestrator |
2026-02-04 05:26:50.995436 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-02-04 05:26:50.995440 | orchestrator | Wednesday 04 February 2026 05:26:21 +0000 (0:00:02.269) 0:02:56.656 ****
2026-02-04 05:26:50.995444 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:26:50.995448 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:26:50.995451 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:26:50.995455 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:26:50.995459 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:26:50.995462 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:26:50.995477 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:26:50.995481 | orchestrator |
2026-02-04 05:26:50.995485 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-02-04 05:26:50.995488 | orchestrator | Wednesday 04 February 2026 05:26:23 +0000 (0:00:02.090) 0:02:58.746 ****
2026-02-04 05:26:50.995492 | orchestrator | ok: [testbed-node-1]
2026-02-04 05:26:50.995496 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:26:50.995500 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:26:50.995504 | orchestrator | ok: [testbed-node-2]
2026-02-04 05:26:50.995507 | orchestrator | ok: [testbed-node-3]
2026-02-04 05:26:50.995511 | orchestrator | ok: [testbed-node-4]
2026-02-04 05:26:50.995515 | orchestrator | ok: [testbed-node-5]
2026-02-04 05:26:50.995519 | orchestrator |
2026-02-04 05:26:50.995522 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-02-04 05:26:50.995526 | orchestrator | Wednesday 04 February 2026 05:26:26 +0000 (0:00:03.298) 0:03:02.045 ****
2026-02-04 05:26:50.995531 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-04 05:26:50.995537 | orchestrator |
2026-02-04 05:26:50.995540 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-02-04 05:26:50.995544 | orchestrator | Wednesday 04 February 2026 05:26:29 +0000 (0:00:02.729) 0:03:04.774 ****
2026-02-04 05:26:50.995548 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:26:50.995552 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:26:50.995556 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:26:50.995559 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:26:50.995563 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:26:50.995567 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:26:50.995571 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:26:50.995574 | orchestrator |
2026-02-04 05:26:50.995588 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-02-04 05:26:50.995593 | orchestrator | Wednesday 04 February 2026 05:26:31 +0000 (0:00:02.250) 0:03:07.024 ****
2026-02-04 05:26:50.995596 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:26:50.995603 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:26:50.995607 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:26:50.995611 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:26:50.995655 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:26:50.995660 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:26:50.995664 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:26:50.995667 | orchestrator |
2026-02-04 05:26:50.995671 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-02-04 05:26:50.995675 | orchestrator | Wednesday 04 February 2026 05:26:34 +0000 (0:00:02.482) 0:03:09.507 ****
2026-02-04 05:26:50.995679 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:26:50.995683 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:26:50.995686 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:26:50.995690 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:26:50.995694 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:26:50.995697 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:26:50.995701 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:26:50.995705 | orchestrator |
2026-02-04 05:26:50.995709 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-02-04 05:26:50.995712 | orchestrator | Wednesday 04 February 2026 05:26:36 +0000 (0:00:01.966) 0:03:11.474 ****
2026-02-04 05:26:50.995716 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:26:50.995720 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:26:50.995724 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:26:50.995727 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:26:50.995731 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:26:50.995735 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:26:50.995739 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:26:50.995742 | orchestrator |
2026-02-04 05:26:50.995746 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] **********************
2026-02-04 05:26:50.995751 | orchestrator | Wednesday 04 February 2026 05:26:38 +0000 (0:00:02.118) 0:03:13.593 ****
2026-02-04 05:26:50.995755 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:26:50.995760 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:26:50.995764 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:26:50.995769 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:26:50.995773 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:26:50.995778 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:26:50.995782 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:26:50.995787 | orchestrator |
2026-02-04 05:26:50.995791 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] ***
2026-02-04 05:26:50.995795 | orchestrator | Wednesday 04 February 2026 05:26:40 +0000 (0:00:01.955) 0:03:15.548 ****
2026-02-04 05:26:50.995800 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:26:50.995804 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:26:50.995809 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:26:50.995813 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:26:50.995818 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:26:50.995822 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:26:50.995827 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:26:50.995831 | orchestrator |
2026-02-04 05:26:50.995836 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] ***
2026-02-04 05:26:50.995840 | orchestrator | Wednesday 04 February 2026 05:26:42 +0000 (0:00:02.201) 0:03:17.750 ****
2026-02-04 05:26:50.995844 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:26:50.995848 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:26:50.995853 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:26:50.995857 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:26:50.995861 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:26:50.995866 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:26:50.995870 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:26:50.995875 | orchestrator |
2026-02-04 05:26:50.995883 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] **************************
2026-02-04 05:26:50.995887 | orchestrator | Wednesday 04 February 2026 05:26:44 +0000 (0:00:01.852) 0:03:19.603 ****
2026-02-04 05:26:50.995892 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:26:50.995896 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:26:50.995900 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:26:50.995908 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:26:50.995912 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:26:50.995917 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:26:50.995922 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:26:50.995926 | orchestrator |
2026-02-04 05:26:50.995930 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] ***
2026-02-04 05:26:50.995935 | orchestrator | Wednesday 04 February 2026 05:26:46 +0000 (0:00:02.201) 0:03:21.805 ****
2026-02-04 05:26:50.995939 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:26:50.995942 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:26:50.995946 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:26:50.995950 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:26:50.995954 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:26:50.995957 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:26:50.995961 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:26:50.995965 | orchestrator |
2026-02-04 05:26:50.995969 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ********************************
2026-02-04 05:26:50.995973 | orchestrator | Wednesday 04 February 2026 05:26:48 +0000 (0:00:02.167) 0:03:23.972 ****
2026-02-04 05:26:50.995976 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:26:50.995980 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:26:50.995984 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:26:50.995988 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:26:50.995991 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:26:50.995995 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:26:50.995999 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:26:50.996003 | orchestrator |
2026-02-04 05:26:50.996006 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ******************
2026-02-04 05:26:50.996010 | orchestrator | Wednesday 04 February 2026 05:26:50 +0000 (0:00:01.963) 0:03:25.936 ****
2026-02-04 05:26:50.996014 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:26:50.996018 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:26:50.996024 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:27:17.224830 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:17.224917 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:17.224925 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:17.224931 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:27:17.224936 | orchestrator |
2026-02-04 05:27:17.224942 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] *******************************
2026-02-04 05:27:17.224948 | orchestrator | Wednesday 04 February 2026 05:26:52 +0000 (0:00:02.361) 0:03:28.297 ****
2026-02-04 05:27:17.224953 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:27:17.224958 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:27:17.224963 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:27:17.224968 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:17.224972 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:17.224977 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:17.224982 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:27:17.224987 | orchestrator |
2026-02-04 05:27:17.224992 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] *********************
2026-02-04 05:27:17.224997 | orchestrator | Wednesday 04 February 2026 05:26:54 +0000 (0:00:02.071) 0:03:30.369 ****
2026-02-04 05:27:17.225002 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:27:17.225007 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:27:17.225012 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:27:17.225017 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})
2026-02-04 05:27:17.225050 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})
2026-02-04 05:27:17.225056 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:17.225061 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})
2026-02-04 05:27:17.225066 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})
2026-02-04 05:27:17.225071 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:17.225076 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})
2026-02-04 05:27:17.225080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})
2026-02-04 05:27:17.225085 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:17.225090 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:27:17.225095 | orchestrator |
2026-02-04 05:27:17.225100 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] *************
2026-02-04 05:27:17.225104 | orchestrator | Wednesday 04 February 2026 05:26:57 +0000 (0:00:02.451) 0:03:32.821 ****
2026-02-04 05:27:17.225109 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:27:17.225114 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:27:17.225118 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:27:17.225123 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:17.225128 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:17.225133 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:17.225137 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:27:17.225142 | orchestrator |
2026-02-04 05:27:17.225147 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************
2026-02-04 05:27:17.225151 | orchestrator | Wednesday 04 February 2026 05:26:59 +0000 (0:00:01.992) 0:03:34.813 ****
2026-02-04 05:27:17.225156 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:27:17.225161 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:27:17.225165 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:27:17.225170 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:17.225175 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:17.225180 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:17.225195 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:27:17.225201 | orchestrator |
2026-02-04 05:27:17.225205 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ******
2026-02-04 05:27:17.225210 | orchestrator | Wednesday 04 February 2026 05:27:01 +0000 (0:00:02.257) 0:03:37.071 ****
2026-02-04 05:27:17.225215 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:27:17.225220 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:27:17.225224 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:27:17.225229 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:17.225234 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:17.225238 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:17.225243 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:27:17.225248 | orchestrator |
2026-02-04 05:27:17.225252 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] ***
2026-02-04 05:27:17.225257 | orchestrator | Wednesday 04 February 2026 05:27:03 +0000 (0:00:02.113) 0:03:39.185 ****
2026-02-04 05:27:17.225262 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:27:17.225267 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:27:17.225272 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:27:17.225276 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:17.225281 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:17.225291 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:17.225296 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:27:17.225301 | orchestrator |
2026-02-04 05:27:17.225305 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ********************************
2026-02-04 05:27:17.225310 | orchestrator | Wednesday 04 February 2026 05:27:06 +0000 (0:00:02.295) 0:03:41.480 ****
2026-02-04 05:27:17.225315 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:27:17.225320 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:27:17.225324 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:27:17.225329 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:17.225334 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:17.225349 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:17.225354 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:27:17.225359 | orchestrator |
2026-02-04 05:27:17.225363 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] **************
2026-02-04 05:27:17.225368 | orchestrator | Wednesday 04 February 2026 05:27:08 +0000 (0:00:02.105) 0:03:43.586 ****
2026-02-04 05:27:17.225373 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:27:17.225378 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:27:17.225382 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:27:17.225388 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:17.225393 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:17.225399 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:17.225404 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:27:17.225410 | orchestrator |
2026-02-04 05:27:17.225416 | orchestrator | TASK [ceph-validate : Include check_devices.yml] *******************************
2026-02-04 05:27:17.225421 | orchestrator | Wednesday 04 February 2026 05:27:09 +0000 (0:00:01.844) 0:03:45.431 ****
2026-02-04 05:27:17.225427 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:27:17.225433 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:27:17.225438 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:27:17.225444 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:27:17.225450 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-04 05:27:17.225455 | orchestrator |
2026-02-04 05:27:17.225461 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************
2026-02-04 05:27:17.225466 | orchestrator | Wednesday 04 February 2026 05:27:12 +0000 (0:00:02.481) 0:03:47.912 ****
2026-02-04 05:27:17.225472 | orchestrator | ok: [testbed-node-3]
2026-02-04 05:27:17.225478 | orchestrator | ok: [testbed-node-4]
2026-02-04 05:27:17.225484 | orchestrator | ok: [testbed-node-5]
2026-02-04 05:27:17.225489 | orchestrator |
2026-02-04 05:27:17.225495 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] **************************
2026-02-04 05:27:17.225500 | orchestrator | Wednesday 04 February 2026 05:27:13 +0000 (0:00:01.429) 0:03:49.342 ****
2026-02-04 05:27:17.225506 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})
2026-02-04 05:27:17.225512 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})
2026-02-04 05:27:17.225517 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:17.225523 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})
2026-02-04 05:27:17.225529 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})
2026-02-04 05:27:17.225535 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:17.225540 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})
2026-02-04 05:27:17.225546 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})
2026-02-04 05:27:17.225556 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:17.225562 | orchestrator |
2026-02-04 05:27:17.225567 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] ***********************
2026-02-04 05:27:17.225573 | orchestrator | Wednesday 04 February 2026 05:27:15 +0000 (0:00:01.540) 0:03:50.882 ****
2026-02-04 05:27:17.225584 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'}, 'ansible_loop_var': 'item'})
2026-02-04 05:27:17.225592 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'}, 'ansible_loop_var': 'item'})
2026-02-04 05:27:17.225598 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:17.225603 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}, 'ansible_loop_var': 'item'})
2026-02-04 05:27:17.225609 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'}, 'ansible_loop_var': 'item'})
2026-02-04 05:27:17.225614 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:17.225624 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}, 'ansible_loop_var': 'item'})
2026-02-04 05:27:27.186822 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'}, 'ansible_loop_var': 'item'})
2026-02-04 05:27:27.186933 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:27.186952 | orchestrator |
2026-02-04 05:27:27.186965 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] ***
2026-02-04 05:27:27.186978 | orchestrator | Wednesday 04 February 2026 05:27:17 +0000 (0:00:01.754) 0:03:52.636 ****
2026-02-04 05:27:27.186990 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:27.187002 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:27.187013 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:27.187024 | orchestrator |
2026-02-04 05:27:27.187035 | orchestrator | TASK [ceph-validate : Get devices information] *********************************
2026-02-04 05:27:27.187046 | orchestrator | Wednesday 04 February 2026 05:27:18 +0000 (0:00:01.397) 0:03:54.033 ****
2026-02-04 05:27:27.187057 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:27.187068 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:27.187079 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:27.187090 | orchestrator |
2026-02-04 05:27:27.187101 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] **************
2026-02-04 05:27:27.187112 | orchestrator | Wednesday 04 February 2026 05:27:20 +0000 (0:00:01.405) 0:03:55.439 ****
2026-02-04 05:27:27.187123 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:27.187133 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:27.187172 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:27.187184 | orchestrator |
2026-02-04 05:27:27.187195 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] ***************
2026-02-04 05:27:27.187206 | orchestrator | Wednesday 04 February 2026 05:27:21 +0000 (0:00:01.348) 0:03:56.788 ****
2026-02-04 05:27:27.187217 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:27.187228 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:27.187239 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:27.187250 | orchestrator |
2026-02-04 05:27:27.187261 | orchestrator | TASK [ceph-validate : Check data logical volume] *******************************
2026-02-04 05:27:27.187272 | orchestrator | Wednesday 04 February 2026 05:27:22 +0000 (0:00:01.381) 0:03:58.169 ****
2026-02-04 05:27:27.187283 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})
2026-02-04 05:27:27.187296 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})
2026-02-04 05:27:27.187307 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})
2026-02-04 05:27:27.187318 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})
2026-02-04 05:27:27.187329 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})
2026-02-04 05:27:27.187355 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})
2026-02-04 05:27:27.187369 | orchestrator |
2026-02-04 05:27:27.187383 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] ***
2026-02-04 05:27:27.187397 | orchestrator | Wednesday 04 February 2026 05:27:25 +0000 (0:00:02.918) 0:04:01.088 ****
2026-02-04 05:27:27.187416 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-33635451-34dd-546b-bd98-6f515d7d790f/osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 957, 'dev': 6, 'nlink': 1, 'atime': 1770172649.3678493, 'mtime': 1770172649.3638492, 'ctime': 1770172649.3638492, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-33635451-34dd-546b-bd98-6f515d7d790f/osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'}, 'ansible_loop_var': 'item'})
2026-02-04 05:27:27.187455 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e/osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 967, 'dev': 6, 'nlink': 1, 'atime': 1770172669.5041528, 'mtime': 1770172669.4991527, 'ctime': 1770172669.4991527, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e/osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'}, 'ansible_loop_var': 'item'})
2026-02-04 05:27:27.187480 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:27:27.187510 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c/osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1770172647.3786397, 'mtime': 1770172647.3746395, 'ctime': 1770172647.3746395, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c/osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}, 'ansible_loop_var': 'item'})
2026-02-04 05:27:27.187527 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-8a64378d-205e-5817-b815-b641dc764843/osd-block-8a64378d-205e-5817-b815-b641dc764843', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1770172667.408937, 'mtime': 1770172667.4049368, 'ctime': 1770172667.4049368, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-8a64378d-205e-5817-b815-b641dc764843/osd-block-8a64378d-205e-5817-b815-b641dc764843', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'}, 'ansible_loop_var': 'item'})
2026-02-04 05:27:27.187542 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:27:27.187564 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af/osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 951, 'dev': 6, 'nlink': 1, 'atime': 1770172646.8478582, 'mtime': 1770172646.841858, 'ctime': 1770172646.841858, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af/osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}, 'ansible_loop_var': 'item'})
2026-02-04 05:27:33.284742 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-43734a2f-bb9f-5443-b704-3f4971f68639/osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 961, 'dev': 6, 'nlink': 1, 'atime': 1770172664.7721288, 'mtime': 1770172664.7651286, 'ctime': 1770172664.7651286, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-43734a2f-bb9f-5443-b704-3f4971f68639/osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'}, 'ansible_loop_var': 'item'})
2026-02-04 05:27:33.284876 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:27:33.284896 | orchestrator |
2026-02-04 05:27:33.284928 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-02-04 05:27:33.284981 | orchestrator | Wednesday 04 February 2026 05:27:27 +0000 (0:00:01.520) 0:04:02.609 ****
2026-02-04 05:27:33.284996 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})
2026-02-04 05:27:33.285010 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})
2026-02-04 05:27:33.285021 |
orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:33.285033 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 05:27:33.285044 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 05:27:33.285055 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:33.285066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 05:27:33.285078 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 05:27:33.285089 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:33.285100 | orchestrator | 2026-02-04 05:27:33.285111 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-02-04 05:27:33.285123 | orchestrator | Wednesday 04 February 2026 05:27:28 +0000 (0:00:01.431) 0:04:04.040 **** 2026-02-04 05:27:33.285157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'}, 'ansible_loop_var': 'item'})  2026-02-04 05:27:33.285171 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'}, 'ansible_loop_var': 'item'})  
2026-02-04 05:27:33.285182 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:33.285193 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}, 'ansible_loop_var': 'item'})  2026-02-04 05:27:33.285222 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'}, 'ansible_loop_var': 'item'})  2026-02-04 05:27:33.285236 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:33.285250 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}, 'ansible_loop_var': 'item'})  2026-02-04 05:27:33.285264 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'}, 'ansible_loop_var': 'item'})  2026-02-04 05:27:33.285276 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:33.285289 | orchestrator | 2026-02-04 05:27:33.285302 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-02-04 05:27:33.285315 | orchestrator | Wednesday 04 February 2026 05:27:30 +0000 (0:00:01.478) 0:04:05.518 **** 2026-02-04 05:27:33.285329 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'})  2026-02-04 05:27:33.285342 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'})  2026-02-04 05:27:33.285354 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:33.285373 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'})  2026-02-04 05:27:33.285387 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'})  2026-02-04 05:27:33.285399 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:33.285413 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'})  2026-02-04 05:27:33.285426 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'})  2026-02-04 05:27:33.285438 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:33.285451 | orchestrator | 2026-02-04 05:27:33.285464 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-02-04 05:27:33.285485 | orchestrator | Wednesday 04 February 2026 05:27:31 +0000 (0:00:01.636) 0:04:07.155 **** 2026-02-04 05:27:33.285499 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-33635451-34dd-546b-bd98-6f515d7d790f', 'data_vg': 'ceph-33635451-34dd-546b-bd98-6f515d7d790f'}, 
'ansible_loop_var': 'item'})  2026-02-04 05:27:33.285514 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-f6bda8a0-a04e-51a6-8ac1-652b1721251e', 'data_vg': 'ceph-f6bda8a0-a04e-51a6-8ac1-652b1721251e'}, 'ansible_loop_var': 'item'})  2026-02-04 05:27:33.285526 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:33.285540 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-f48ca6a8-b497-5c65-8a3b-569ec358ef4c', 'data_vg': 'ceph-f48ca6a8-b497-5c65-8a3b-569ec358ef4c'}, 'ansible_loop_var': 'item'})  2026-02-04 05:27:33.285554 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-8a64378d-205e-5817-b815-b641dc764843', 'data_vg': 'ceph-8a64378d-205e-5817-b815-b641dc764843'}, 'ansible_loop_var': 'item'})  2026-02-04 05:27:33.285565 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:33.285576 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af', 'data_vg': 'ceph-7ab9afb0-5bc3-5f2a-af50-46dbad87a4af'}, 'ansible_loop_var': 'item'})  2026-02-04 05:27:33.285594 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-43734a2f-bb9f-5443-b704-3f4971f68639', 'data_vg': 'ceph-43734a2f-bb9f-5443-b704-3f4971f68639'}, 'ansible_loop_var': 'item'})  2026-02-04 05:27:42.993644 | 
orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:42.993762 | orchestrator | 2026-02-04 05:27:42.993772 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-02-04 05:27:42.993780 | orchestrator | Wednesday 04 February 2026 05:27:33 +0000 (0:00:01.545) 0:04:08.700 **** 2026-02-04 05:27:42.993787 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:27:42.993793 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:27:42.993799 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:27:42.993805 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:42.993811 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:42.993817 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:42.993823 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:27:42.993829 | orchestrator | 2026-02-04 05:27:42.993835 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-02-04 05:27:42.993841 | orchestrator | Wednesday 04 February 2026 05:27:35 +0000 (0:00:01.954) 0:04:10.655 **** 2026-02-04 05:27:42.993847 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:27:42.993853 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:27:42.993859 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:27:42.993865 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:27:42.993871 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-04 05:27:42.993877 | orchestrator | 2026-02-04 05:27:42.993883 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-02-04 05:27:42.993889 | orchestrator | Wednesday 04 February 2026 05:27:37 +0000 (0:00:02.715) 0:04:13.371 **** 2026-02-04 05:27:42.993924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-04 05:27:42.993933 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.993939 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.993945 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.993951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.993957 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:42.993963 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.993968 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.993974 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.993980 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.993986 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.993991 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:42.993997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994003 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 
'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994009 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994070 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:42.994076 | orchestrator | 2026-02-04 05:27:42.994082 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-04 05:27:42.994088 | orchestrator | Wednesday 04 February 2026 05:27:39 +0000 (0:00:01.440) 0:04:14.812 **** 2026-02-04 05:27:42.994094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994100 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994111 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994130 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994137 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:42.994153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-04 05:27:42.994159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994196 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:42.994202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994208 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994241 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994248 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:42.994255 | orchestrator | 2026-02-04 05:27:42.994262 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-04 05:27:42.994269 | orchestrator | Wednesday 04 February 2026 05:27:41 +0000 (0:00:01.706) 0:04:16.519 **** 2026-02-04 
05:27:42.994276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994297 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994310 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:42.994317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994338 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994345 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994351 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:42.994358 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-04 05:27:42.994365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-04 05:27:42.994398 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:42.994405 | orchestrator | 2026-02-04 05:27:42.994412 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-04 05:27:42.994418 | orchestrator | Wednesday 04 February 2026 05:27:42 +0000 (0:00:01.422) 0:04:17.941 **** 2026-02-04 05:27:42.994425 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:27:42.994432 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:27:42.994443 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:27:58.255769 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:58.255883 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:58.255899 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:58.255910 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:27:58.255922 | orchestrator | 2026-02-04 05:27:58.255935 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-04 05:27:58.255947 | orchestrator | Wednesday 04 February 2026 05:27:44 +0000 (0:00:01.861) 0:04:19.803 **** 2026-02-04 05:27:58.255958 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:27:58.255969 | orchestrator | skipping: [testbed-node-1] 2026-02-04 
05:27:58.255980 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:27:58.255991 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:58.256002 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:58.256013 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:58.256024 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:27:58.256035 | orchestrator | 2026-02-04 05:27:58.256046 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-04 05:27:58.256057 | orchestrator | Wednesday 04 February 2026 05:27:46 +0000 (0:00:02.248) 0:04:22.052 **** 2026-02-04 05:27:58.256068 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:27:58.256078 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:27:58.256089 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:27:58.256100 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:58.256111 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:58.256121 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:58.256132 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:27:58.256143 | orchestrator | 2026-02-04 05:27:58.256154 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] 
*** 2026-02-04 05:27:58.256181 | orchestrator | Wednesday 04 February 2026 05:27:48 +0000 (0:00:02.179) 0:04:24.231 **** 2026-02-04 05:27:58.256193 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:27:58.256203 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:27:58.256214 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:27:58.256225 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:58.256235 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:58.256246 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:58.256257 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:27:58.256270 | orchestrator | 2026-02-04 05:27:58.256284 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-04 05:27:58.256298 | orchestrator | Wednesday 04 February 2026 05:27:50 +0000 (0:00:01.877) 0:04:26.109 **** 2026-02-04 05:27:58.256312 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:27:58.256325 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:27:58.256338 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:27:58.256350 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:58.256363 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:58.256375 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:58.256388 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:27:58.256401 | orchestrator | 2026-02-04 05:27:58.256433 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-04 05:27:58.256447 | orchestrator | Wednesday 04 February 2026 05:27:52 +0000 (0:00:02.088) 0:04:28.197 **** 2026-02-04 05:27:58.256461 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:27:58.256472 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:27:58.256485 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:27:58.256497 | orchestrator | skipping: [testbed-node-3] 
2026-02-04 05:27:58.256510 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:58.256522 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:58.256535 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:27:58.256547 | orchestrator | 2026-02-04 05:27:58.256560 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-04 05:27:58.256573 | orchestrator | Wednesday 04 February 2026 05:27:54 +0000 (0:00:02.061) 0:04:30.259 **** 2026-02-04 05:27:58.256586 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:27:58.256598 | orchestrator | skipping: [testbed-node-1] 2026-02-04 05:27:58.256612 | orchestrator | skipping: [testbed-node-2] 2026-02-04 05:27:58.256624 | orchestrator | skipping: [testbed-node-3] 2026-02-04 05:27:58.256634 | orchestrator | skipping: [testbed-node-4] 2026-02-04 05:27:58.256645 | orchestrator | skipping: [testbed-node-5] 2026-02-04 05:27:58.256655 | orchestrator | skipping: [testbed-manager] 2026-02-04 05:27:58.256685 | orchestrator | 2026-02-04 05:27:58.256696 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-04 05:27:58.256707 | orchestrator | Wednesday 04 February 2026 05:27:57 +0000 (0:00:02.447) 0:04:32.706 **** 2026-02-04 05:27:58.256719 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-04 05:27:58.256732 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-04 05:27:58.256745 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-04 05:27:58.256757 | orchestrator | skipping: 
[testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-04 05:27:58.256768 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-04 05:27:58.256781 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-04 05:27:58.256792 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:27:58.256820 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-04 05:27:58.256832 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-04 05:27:58.256843 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-04 05:27:58.256854 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-04 05:27:58.256865 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-04 05:27:58.256884 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-02-04 05:27:58.256895 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:27:58.256911 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 05:27:58.256922 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 05:27:58.256933 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 05:27:58.256944 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 05:27:58.256955 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 05:27:58.256966 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 05:27:58.256977 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 05:27:58.256987 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:27:58.256998 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 05:27:58.257009 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 05:27:58.257020 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 05:27:58.257031 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 05:27:58.257042 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 05:27:58.257053 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 05:27:58.257064 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 05:27:58.257074 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 05:27:58.257091 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 05:28:02.979283 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 05:28:02.979368 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:28:02.979380 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 05:28:02.979435 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 05:28:02.979449 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 05:28:02.979462 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:28:02.979475 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 05:28:02.979501 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 05:28:02.979510 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 05:28:02.979519 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 05:28:02.979526 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 05:28:02.979534 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 05:28:02.979541 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 05:28:02.979548 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:28:02.979556 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 05:28:02.979563 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 05:28:02.979570 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 05:28:02.979577 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:28:02.979585 | orchestrator |
2026-02-04 05:28:02.979593 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-02-04 05:28:02.979603 | orchestrator | Wednesday 04 February 2026 05:27:59 +0000 (0:00:02.338) 0:04:35.045 ****
2026-02-04 05:28:02.979615 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:28:02.979626 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:28:02.979637 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:28:02.979648 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:28:02.979660 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:28:02.979730 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:28:02.979746 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:28:02.979758 | orchestrator |
2026-02-04 05:28:02.979771 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-02-04 05:28:02.979784 | orchestrator | Wednesday 04 February 2026 05:28:01 +0000 (0:00:02.293) 0:04:37.338 ****
2026-02-04 05:28:02.979792 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 05:28:02.979808 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 05:28:02.979815 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 05:28:02.979823 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 05:28:02.979847 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 05:28:02.979856 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 05:28:02.979864 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:28:02.979873 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 05:28:02.979881 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 05:28:02.979890 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 05:28:02.979899 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 05:28:02.979913 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 05:28:02.979923 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 05:28:02.979931 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:28:02.979939 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 05:28:02.979948 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 05:28:02.979957 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 05:28:02.979967 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 05:28:02.979977 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 05:28:02.979986 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 05:28:02.979996 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:28:02.980006 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 05:28:02.980022 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 05:28:02.980033 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 05:28:02.980043 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 05:28:02.980053 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 05:28:02.980064 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 05:28:02.980075 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 05:28:02.980092 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 05:28:33.295474 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 05:28:33.295590 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 05:28:33.295608 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 05:28:33.295621 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 05:28:33.295633 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:28:33.295646 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 05:28:33.295673 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 05:28:33.295686 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 05:28:33.295749 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 05:28:33.295761 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:28:33.295772 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-04 05:28:33.295783 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 05:28:33.295794 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-04 05:28:33.295805 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-04 05:28:33.295837 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-04 05:28:33.295848 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-04 05:28:33.295859 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 05:28:33.295869 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:28:33.295880 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-04 05:28:33.295891 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:28:33.295902 | orchestrator |
2026-02-04 05:28:33.295914 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-02-04 05:28:33.295926 | orchestrator | Wednesday 04 February 2026 05:28:04 +0000 (0:00:02.355) 0:04:39.693 ****
2026-02-04 05:28:33.295937 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:28:33.295948 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:28:33.295958 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:28:33.295969 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:28:33.295979 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:28:33.295990 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:28:33.296001 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:28:33.296014 | orchestrator |
2026-02-04 05:28:33.296027 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-02-04 05:28:33.296040 | orchestrator | Wednesday 04 February 2026 05:28:06 +0000 (0:00:02.164) 0:04:41.857 ****
2026-02-04 05:28:33.296053 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:28:33.296066 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:28:33.296079 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:28:33.296091 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:28:33.296104 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:28:33.296116 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:28:33.296129 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:28:33.296142 | orchestrator |
2026-02-04 05:28:33.296155 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-02-04 05:28:33.296186 | orchestrator | Wednesday 04 February 2026 05:28:08 +0000 (0:00:02.100) 0:04:43.958 ****
2026-02-04 05:28:33.296199 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:28:33.296211 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:28:33.296224 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:28:33.296236 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:28:33.296248 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:28:33.296261 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:28:33.296272 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:28:33.296285 | orchestrator |
2026-02-04 05:28:33.296297 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-04 05:28:33.296309 | orchestrator | Wednesday 04 February 2026 05:28:10 +0000 (0:00:02.358) 0:04:46.317 ****
2026-02-04 05:28:33.296323 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-04 05:28:33.296337 | orchestrator |
2026-02-04 05:28:33.296351 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-02-04 05:28:33.296363 | orchestrator | Wednesday 04 February 2026 05:28:13 +0000 (0:00:02.779) 0:04:49.097 ****
2026-02-04 05:28:33.296374 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 05:28:33.296398 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 05:28:33.296409 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 05:28:33.296432 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 05:28:33.296443 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 05:28:33.296454 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 05:28:33.296465 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-04 05:28:33.296497 | orchestrator |
2026-02-04 05:28:33.296508 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-02-04 05:28:33.296519 | orchestrator | Wednesday 04 February 2026 05:28:15 +0000 (0:00:02.174) 0:04:51.272 ****
2026-02-04 05:28:33.296530 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:28:33.296540 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:28:33.296551 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:28:33.296562 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:28:33.296573 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:28:33.296584 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:28:33.296594 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:28:33.296605 | orchestrator |
2026-02-04 05:28:33.296615 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-02-04 05:28:33.296626 | orchestrator | Wednesday 04 February 2026 05:28:18 +0000 (0:00:02.337) 0:04:53.609 ****
2026-02-04 05:28:33.296637 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:28:33.296647 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:28:33.296658 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:28:33.296669 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:28:33.296679 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:28:33.296690 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:28:33.296741 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:28:33.296752 | orchestrator |
2026-02-04 05:28:33.296763 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-02-04 05:28:33.296774 | orchestrator | Wednesday 04 February 2026 05:28:20 +0000 (0:00:02.088) 0:04:55.697 ****
2026-02-04 05:28:33.296785 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:28:33.296796 | orchestrator | ok: [testbed-node-1]
2026-02-04 05:28:33.296807 | orchestrator | ok: [testbed-node-2]
2026-02-04 05:28:33.296817 | orchestrator | ok: [testbed-node-3]
2026-02-04 05:28:33.296828 | orchestrator | ok: [testbed-node-4]
2026-02-04 05:28:33.296838 | orchestrator | ok: [testbed-node-5]
2026-02-04 05:28:33.296849 | orchestrator | ok: [testbed-manager]
2026-02-04 05:28:33.296860 | orchestrator |
2026-02-04 05:28:33.296871 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-02-04 05:28:33.296881 | orchestrator | Wednesday 04 February 2026 05:28:22 +0000 (0:00:02.592) 0:04:58.290 ****
2026-02-04 05:28:33.296892 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:28:33.296903 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:28:33.296914 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:28:33.296924 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:28:33.296935 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:28:33.296945 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:28:33.296956 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:28:33.296967 | orchestrator |
2026-02-04 05:28:33.296978 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-04 05:28:33.296989 | orchestrator | Wednesday 04 February 2026 05:28:25 +0000 (0:00:02.435) 0:05:00.726 ****
2026-02-04 05:28:33.296999 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:28:33.297010 | orchestrator | skipping: [testbed-node-1]
2026-02-04 05:28:33.297021 | orchestrator | skipping: [testbed-node-2]
2026-02-04 05:28:33.297039 | orchestrator | skipping: [testbed-node-3]
2026-02-04 05:28:33.297050 | orchestrator | skipping: [testbed-node-4]
2026-02-04 05:28:33.297060 | orchestrator | skipping: [testbed-node-5]
2026-02-04 05:28:33.297071 | orchestrator | skipping: [testbed-manager]
2026-02-04 05:28:33.297082 | orchestrator |
2026-02-04 05:28:33.297092 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-02-04 05:28:33.297103 | orchestrator | Wednesday 04 February 2026 05:28:27 +0000 (0:00:02.492) 0:05:03.218 ****
2026-02-04 05:28:33.297114 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:28:33.297125 | orchestrator |
2026-02-04 05:28:33.297135 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-02-04 05:28:33.297146 | orchestrator | Wednesday 04 February 2026 05:28:30 +0000 (0:00:02.746) 0:05:05.965 ****
2026-02-04 05:28:33.297157 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:28:33.297168 | orchestrator |
2026-02-04 05:28:33.297186 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-02-04 05:29:13.359039 | orchestrator |
2026-02-04 05:29:13.359190 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-04 05:29:13.359209 | orchestrator | Wednesday 04 February 2026 05:28:33 +0000 (0:00:02.748) 0:05:08.713 ****
2026-02-04 05:29:13.359221 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:29:13.359233 | orchestrator |
2026-02-04 05:29:13.359245 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-04 05:29:13.359256 | orchestrator | Wednesday 04 February 2026 05:28:34 +0000 (0:00:01.477) 0:05:10.191 ****
2026-02-04 05:29:13.359268 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:29:13.359278 | orchestrator |
2026-02-04 05:29:13.359289 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-02-04 05:29:13.359301 | orchestrator | Wednesday 04 February 2026 05:28:35 +0000 (0:00:01.142) 0:05:11.333 ****
2026-02-04 05:29:13.359314 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-04 05:29:13.359345 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-04 05:29:13.359358 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-04 05:29:13.359370 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-04 05:29:13.359384 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-04 05:29:13.359396 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}])
2026-02-04 05:29:13.359432 | orchestrator |
2026-02-04 05:29:13.359444 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-04 05:29:13.359454 | orchestrator |
2026-02-04 05:29:13.359465 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-04 05:29:13.359476 | orchestrator | Wednesday 04 February 2026 05:28:46 +0000 (0:00:10.493) 0:05:21.827 ****
2026-02-04 05:29:13.359487 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:29:13.359498 | orchestrator |
2026-02-04 05:29:13.359508 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-04 05:29:13.359519 | orchestrator | Wednesday 04 February 2026 05:28:47 +0000 (0:00:01.492) 0:05:23.319 ****
2026-02-04 05:29:13.359530 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:29:13.359541 | orchestrator |
2026-02-04 05:29:13.359552 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-04 05:29:13.359562 | orchestrator | Wednesday 04 February 2026 05:28:49 +0000 (0:00:01.150) 0:05:24.470 ****
2026-02-04 05:29:13.359573 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:29:13.359585 | orchestrator |
2026-02-04 05:29:13.359595 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-04 05:29:13.359606 | orchestrator | Wednesday 04 February 2026 05:28:50 +0000 (0:00:01.141) 0:05:25.612 ****
2026-02-04 05:29:13.359616 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:29:13.359627 | orchestrator |
2026-02-04 05:29:13.359638 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-04 05:29:13.359648 | orchestrator | Wednesday 04 February 2026 05:28:51 +0000 (0:00:01.201) 0:05:26.770 ****
2026-02-04 05:29:13.359659 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-04 05:29:13.359670 | orchestrator |
2026-02-04 05:29:13.359680 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-04 05:29:13.359707 | orchestrator | Wednesday 04 February 2026 05:28:52 +0000 (0:00:01.201) 0:05:27.971 ****
2026-02-04 05:29:13.359719 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:29:13.359756 | orchestrator |
2026-02-04 05:29:13.359768 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-04 05:29:13.359778 | orchestrator | Wednesday 04 February 2026 05:28:53 +0000 (0:00:01.442) 0:05:29.414 ****
2026-02-04 05:29:13.359789 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:29:13.359800 | orchestrator |
2026-02-04 05:29:13.359811 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-04 05:29:13.359822 | orchestrator | Wednesday 04 February 2026 05:28:55 +0000 (0:00:01.199) 0:05:30.613 ****
2026-02-04 05:29:13.359833 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:29:13.359844 | orchestrator |
2026-02-04 05:29:13.359855 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-04 05:29:13.359866 | orchestrator | Wednesday 04 February 2026 05:28:56 +0000 (0:00:01.445) 0:05:32.059 ****
2026-02-04 05:29:13.359876 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:29:13.359887 | orchestrator |
2026-02-04 05:29:13.359898 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-04 05:29:13.359909 | orchestrator | Wednesday 04 February 2026 05:28:57 +0000 (0:00:01.197) 0:05:33.257 ****
2026-02-04 05:29:13.359920 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:29:13.359931 | orchestrator |
2026-02-04 05:29:13.359942 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-04 05:29:13.359959 | orchestrator | Wednesday 04 February 2026 05:28:58 +0000 (0:00:01.163) 0:05:34.420 ****
2026-02-04 05:29:13.359970 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:29:13.359980 | orchestrator |
2026-02-04 05:29:13.359991 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-04 05:29:13.360003 | orchestrator | Wednesday 04 February 2026 05:29:00 +0000 (0:00:01.274) 0:05:35.694 ****
2026-02-04 05:29:13.360023 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:29:13.360034 | orchestrator |
2026-02-04 05:29:13.360045 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-04 05:29:13.360056 | orchestrator | Wednesday 04 February 2026 05:29:01 +0000 (0:00:01.182) 0:05:36.877 ****
2026-02-04 05:29:13.360067 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:29:13.360077 | orchestrator |
2026-02-04 05:29:13.360088 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-04 05:29:13.360099 | orchestrator | Wednesday 04 February 2026 05:29:02 +0000 (0:00:01.148) 0:05:38.026 ****
2026-02-04 05:29:13.360110 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 05:29:13.360122 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 05:29:13.360133 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 05:29:13.360144 | orchestrator |
2026-02-04 05:29:13.360154 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-04 05:29:13.360166 | orchestrator | Wednesday 04 February 2026 05:29:04 +0000 (0:00:01.660) 0:05:39.686 ****
2026-02-04 05:29:13.360177 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:29:13.360187 | orchestrator |
2026-02-04 05:29:13.360198 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-04 05:29:13.360209 | orchestrator | Wednesday 04 February 2026 05:29:05 +0000 (0:00:01.273) 0:05:40.959 ****
2026-02-04 05:29:13.360219 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 05:29:13.360230 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 05:29:13.360241 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 05:29:13.360252 | orchestrator |
2026-02-04 05:29:13.360263 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-04 05:29:13.360274 | orchestrator | Wednesday 04 February 2026 05:29:08 +0000 (0:00:03.265) 0:05:44.225 ****
2026-02-04 05:29:13.360285 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 05:29:13.360296 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 05:29:13.360307 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 05:29:13.360317 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:29:13.360328 | orchestrator |
2026-02-04 05:29:13.360344 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-04 05:29:13.360363 | orchestrator | Wednesday 04 February 2026 05:29:10 +0000 (0:00:01.444) 0:05:45.670 ****
2026-02-04 05:29:13.360385 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-04 05:29:13.360406 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-04 05:29:13.360425 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-04 05:29:13.360443 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:29:13.360462 | orchestrator |
2026-02-04 05:29:13.360481 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-04 05:29:13.360501 | orchestrator | Wednesday 04 February 2026 05:29:12 +0000 (0:00:01.937) 0:05:47.607 ****
2026-02-04 05:29:13.360532 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-04 05:29:33.804351 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-04 05:29:33.804519 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment |
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-04 05:29:33.804543 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:33.804558 | orchestrator | 2026-02-04 05:29:33.804570 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-04 05:29:33.804582 | orchestrator | Wednesday 04 February 2026 05:29:13 +0000 (0:00:01.169) 0:05:48.777 **** 2026-02-04 05:29:33.804595 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'f8b4daebdb0f', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-04 05:29:06.099525', 'end': '2026-02-04 05:29:06.154149', 'delta': '0:00:00.054624', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f8b4daebdb0f'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-04 05:29:33.804609 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'e8207b686900', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-04 05:29:06.692946', 'end': '2026-02-04 05:29:06.746375', 'delta': '0:00:00.053429', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e8207b686900'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-04 05:29:33.804621 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'c48be97cec44', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-04 05:29:07.555734', 'end': '2026-02-04 05:29:07.614775', 'delta': '0:00:00.059041', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['c48be97cec44'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-04 05:29:33.804633 | orchestrator | 2026-02-04 05:29:33.804644 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-04 05:29:33.804655 | orchestrator | Wednesday 04 February 2026 05:29:14 +0000 (0:00:01.181) 0:05:49.958 **** 2026-02-04 05:29:33.804688 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:29:33.804700 | orchestrator | 2026-02-04 05:29:33.804711 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-04 05:29:33.804722 | orchestrator | Wednesday 04 February 2026 05:29:16 +0000 (0:00:01.678) 0:05:51.637 **** 2026-02-04 05:29:33.804732 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:33.804836 | orchestrator | 2026-02-04 05:29:33.804848 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-04 05:29:33.804859 | orchestrator | Wednesday 04 February 2026 05:29:17 +0000 (0:00:01.281) 0:05:52.919 **** 2026-02-04 05:29:33.804870 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:29:33.804882 | orchestrator | 2026-02-04 05:29:33.804895 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-02-04 05:29:33.804908 | orchestrator | Wednesday 04 February 2026 05:29:18 +0000 (0:00:01.132) 0:05:54.051 **** 2026-02-04 05:29:33.804940 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-04 05:29:33.804953 | orchestrator | 2026-02-04 05:29:33.804967 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-04 05:29:33.804979 | orchestrator | Wednesday 04 February 2026 05:29:20 +0000 (0:00:02.087) 0:05:56.138 **** 2026-02-04 05:29:33.804992 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:29:33.805004 | orchestrator | 2026-02-04 05:29:33.805017 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-04 05:29:33.805029 | orchestrator | Wednesday 04 February 2026 05:29:21 +0000 (0:00:01.184) 0:05:57.324 **** 2026-02-04 05:29:33.805042 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:33.805055 | orchestrator | 2026-02-04 05:29:33.805067 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-04 05:29:33.805080 | orchestrator | Wednesday 04 February 2026 05:29:23 +0000 (0:00:01.212) 0:05:58.536 **** 2026-02-04 05:29:33.805093 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:33.805106 | orchestrator | 2026-02-04 05:29:33.805117 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-04 05:29:33.805135 | orchestrator | Wednesday 04 February 2026 05:29:24 +0000 (0:00:01.229) 0:05:59.766 **** 2026-02-04 05:29:33.805146 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:33.805157 | orchestrator | 2026-02-04 05:29:33.805168 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-04 05:29:33.805179 | orchestrator | Wednesday 04 February 2026 05:29:25 +0000 (0:00:01.124) 0:06:00.890 **** 
2026-02-04 05:29:33.805190 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:33.805201 | orchestrator | 2026-02-04 05:29:33.805212 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-04 05:29:33.805223 | orchestrator | Wednesday 04 February 2026 05:29:26 +0000 (0:00:01.155) 0:06:02.046 **** 2026-02-04 05:29:33.805234 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:33.805245 | orchestrator | 2026-02-04 05:29:33.805255 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-04 05:29:33.805267 | orchestrator | Wednesday 04 February 2026 05:29:27 +0000 (0:00:01.223) 0:06:03.270 **** 2026-02-04 05:29:33.805277 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:33.805288 | orchestrator | 2026-02-04 05:29:33.805299 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-04 05:29:33.805310 | orchestrator | Wednesday 04 February 2026 05:29:28 +0000 (0:00:01.145) 0:06:04.416 **** 2026-02-04 05:29:33.805321 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:33.805332 | orchestrator | 2026-02-04 05:29:33.805343 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-04 05:29:33.805354 | orchestrator | Wednesday 04 February 2026 05:29:30 +0000 (0:00:01.165) 0:06:05.581 **** 2026-02-04 05:29:33.805365 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:33.805375 | orchestrator | 2026-02-04 05:29:33.805386 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-04 05:29:33.805398 | orchestrator | Wednesday 04 February 2026 05:29:31 +0000 (0:00:01.177) 0:06:06.759 **** 2026-02-04 05:29:33.805419 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:33.805430 | orchestrator | 2026-02-04 05:29:33.805441 | orchestrator | TASK [ceph-facts : Collect existed devices] 
************************************ 2026-02-04 05:29:33.805451 | orchestrator | Wednesday 04 February 2026 05:29:32 +0000 (0:00:01.154) 0:06:07.914 **** 2026-02-04 05:29:33.805463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:29:33.805475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:29:33.805487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:29:33.805499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 
'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-04 05:29:33.805520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:29:35.063500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:29:35.063606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:29:35.063629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5c0a15c2', 'removable': '0', 'support_discard': '4096', 'partitions': 
{'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-04 05:29:35.063672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:29:35.063685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-04 05:29:35.063696 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:35.063709 | orchestrator | 2026-02-04 05:29:35.063721 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-04 05:29:35.063733 | orchestrator | Wednesday 04 February 2026 05:29:33 +0000 (0:00:01.300) 0:06:09.214 **** 2026-02-04 05:29:35.063807 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:29:35.063822 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:29:35.063842 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:29:35.063855 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-04-01-20-05-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:29:35.063899 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:29:35.063912 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:29:35.063933 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:29:59.107941 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5c0a15c2', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_5c0a15c2-b328-40df-8b11-eca46f34c8bf-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:29:59.108113 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:29:59.108148 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-04 05:29:59.108171 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:59.108195 | orchestrator | 2026-02-04 05:29:59.108212 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-04 05:29:59.108224 | 
orchestrator | Wednesday 04 February 2026 05:29:35 +0000 (0:00:01.265) 0:06:10.480 **** 2026-02-04 05:29:59.108235 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:29:59.108247 | orchestrator | 2026-02-04 05:29:59.108258 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-04 05:29:59.108269 | orchestrator | Wednesday 04 February 2026 05:29:36 +0000 (0:00:01.544) 0:06:12.025 **** 2026-02-04 05:29:59.108280 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:29:59.108290 | orchestrator | 2026-02-04 05:29:59.108301 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-04 05:29:59.108340 | orchestrator | Wednesday 04 February 2026 05:29:37 +0000 (0:00:01.136) 0:06:13.161 **** 2026-02-04 05:29:59.108353 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:29:59.108374 | orchestrator | 2026-02-04 05:29:59.108388 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-04 05:29:59.108401 | orchestrator | Wednesday 04 February 2026 05:29:39 +0000 (0:00:01.492) 0:06:14.654 **** 2026-02-04 05:29:59.108420 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:59.108440 | orchestrator | 2026-02-04 05:29:59.108458 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-04 05:29:59.108477 | orchestrator | Wednesday 04 February 2026 05:29:40 +0000 (0:00:01.149) 0:06:15.804 **** 2026-02-04 05:29:59.108495 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:59.108514 | orchestrator | 2026-02-04 05:29:59.108533 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-04 05:29:59.108551 | orchestrator | Wednesday 04 February 2026 05:29:41 +0000 (0:00:01.246) 0:06:17.050 **** 2026-02-04 05:29:59.108567 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:29:59.108578 | orchestrator | 2026-02-04 05:29:59.108589 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-04 05:29:59.108599 | orchestrator | Wednesday 04 February 2026 05:29:42 +0000 (0:00:01.160) 0:06:18.211 ****
2026-02-04 05:29:59.108610 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 05:29:59.108622 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 05:29:59.108632 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 05:29:59.108643 | orchestrator |
2026-02-04 05:29:59.108654 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-04 05:29:59.108665 | orchestrator | Wednesday 04 February 2026 05:29:44 +0000 (0:00:01.982) 0:06:20.194 ****
2026-02-04 05:29:59.108675 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 05:29:59.108687 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-04 05:29:59.108698 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-04 05:29:59.108709 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:29:59.108738 | orchestrator |
2026-02-04 05:29:59.108787 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-04 05:29:59.108807 | orchestrator | Wednesday 04 February 2026 05:29:45 +0000 (0:00:01.234) 0:06:21.428 ****
2026-02-04 05:29:59.108825 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:29:59.108843 | orchestrator |
2026-02-04 05:29:59.108861 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-04 05:29:59.108880 | orchestrator | Wednesday 04 February 2026 05:29:47 +0000 (0:00:01.138) 0:06:22.567 ****
2026-02-04 05:29:59.108900 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 05:29:59.108918 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 05:29:59.108935 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 05:29:59.108946 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-04 05:29:59.108957 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-04 05:29:59.108967 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-04 05:29:59.108978 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-04 05:29:59.108989 | orchestrator |
2026-02-04 05:29:59.109000 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-04 05:29:59.109010 | orchestrator | Wednesday 04 February 2026 05:29:49 +0000 (0:00:02.134) 0:06:24.702 ****
2026-02-04 05:29:59.109021 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 05:29:59.109032 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 05:29:59.109042 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 05:29:59.109053 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-04 05:29:59.109073 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-04 05:29:59.109084 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-04 05:29:59.109095 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-04 05:29:59.109105 | orchestrator |
2026-02-04 05:29:59.109116 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-02-04 05:29:59.109127 | orchestrator | Wednesday 04 February 2026 05:29:52 +0000 (0:00:02.951) 0:06:27.653 ****
2026-02-04 05:29:59.109138 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-04 05:29:59.109148 | orchestrator |
2026-02-04 05:29:59.109166 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-02-04 05:29:59.109184 | orchestrator | Wednesday 04 February 2026 05:29:54 +0000 (0:00:02.185) 0:06:29.839 ****
2026-02-04 05:29:59.109202 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:29:59.109221 | orchestrator |
2026-02-04 05:29:59.109239 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-02-04 05:29:59.109259 | orchestrator | Wednesday 04 February 2026 05:29:55 +0000 (0:00:01.291) 0:06:31.130 ****
2026-02-04 05:29:59.109278 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:29:59.109297 | orchestrator |
2026-02-04 05:29:59.109309 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-02-04 05:29:59.109320 | orchestrator | Wednesday 04 February 2026 05:29:56 +0000 (0:00:01.174) 0:06:32.305 ****
2026-02-04 05:29:59.109331 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-04 05:29:59.109341 | orchestrator |
2026-02-04 05:29:59.109352 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-02-04 05:29:59.109379 | orchestrator | Wednesday 04 February 2026 05:29:59 +0000 (0:00:02.217) 0:06:34.523 ****
2026-02-04 05:31:02.382099 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.382243 | orchestrator |
2026-02-04 05:31:02.382275 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-02-04 05:31:02.382299 | orchestrator | Wednesday 04 February 2026 05:30:00 +0000 (0:00:01.160) 0:06:35.683 ****
2026-02-04 05:31:02.382320 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 05:31:02.382336 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-04 05:31:02.382349 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-04 05:31:02.382360 | orchestrator |
2026-02-04 05:31:02.382371 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-04 05:31:02.382382 | orchestrator | Wednesday 04 February 2026 05:30:02 +0000 (0:00:02.738) 0:06:38.421 ****
2026-02-04 05:31:02.382393 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-04 05:31:02.382404 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-04 05:31:02.382415 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-04 05:31:02.382426 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-04 05:31:02.382437 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-04 05:31:02.382448 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-04 05:31:02.382459 | orchestrator |
2026-02-04 05:31:02.382470 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-04 05:31:02.382483 | orchestrator | Wednesday 04 February 2026 05:30:16 +0000 (0:00:13.406) 0:06:51.828 ****
2026-02-04 05:31:02.382501 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 05:31:02.382520 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-04 05:31:02.382569 | orchestrator |
2026-02-04 05:31:02.382591 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-04 05:31:02.382610 | orchestrator | Wednesday 04 February 2026 05:30:20 +0000 (0:00:04.096) 0:06:55.925 ****
2026-02-04 05:31:02.382630 | orchestrator | changed: [testbed-node-0]
2026-02-04 05:31:02.382649 | orchestrator |
2026-02-04 05:31:02.382669 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-04 05:31:02.382689 | orchestrator | Wednesday 04 February 2026 05:30:24 +0000 (0:00:03.524) 0:06:59.449 ****
2026-02-04 05:31:02.382710 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-02-04 05:31:02.382730 | orchestrator |
2026-02-04 05:31:02.382756 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-04 05:31:02.382776 | orchestrator | Wednesday 04 February 2026 05:30:25 +0000 (0:00:01.430) 0:07:00.880 ****
2026-02-04 05:31:02.382794 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-02-04 05:31:02.382916 | orchestrator |
2026-02-04 05:31:02.382938 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-04 05:31:02.382957 | orchestrator | Wednesday 04 February 2026 05:30:27 +0000 (0:00:01.621) 0:07:02.503 ****
2026-02-04 05:31:02.382976 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:02.382995 | orchestrator |
2026-02-04 05:31:02.383014 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-04 05:31:02.383032 | orchestrator | Wednesday 04 February 2026 05:30:28 +0000 (0:00:01.597) 0:07:04.100 ****
2026-02-04 05:31:02.383050 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.383067 | orchestrator |
2026-02-04 05:31:02.383088 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-04 05:31:02.383107 | orchestrator | Wednesday 04 February 2026 05:30:29 +0000 (0:00:01.174) 0:07:05.275 ****
2026-02-04 05:31:02.383125 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.383146 | orchestrator |
2026-02-04 05:31:02.383165 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-04 05:31:02.383185 | orchestrator | Wednesday 04 February 2026 05:30:30 +0000 (0:00:01.152) 0:07:06.427 ****
2026-02-04 05:31:02.383205 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.383227 | orchestrator |
2026-02-04 05:31:02.383246 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-04 05:31:02.383265 | orchestrator | Wednesday 04 February 2026 05:30:32 +0000 (0:00:01.176) 0:07:07.603 ****
2026-02-04 05:31:02.383284 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:02.383304 | orchestrator |
2026-02-04 05:31:02.383324 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-04 05:31:02.383342 | orchestrator | Wednesday 04 February 2026 05:30:33 +0000 (0:00:01.588) 0:07:09.192 ****
2026-02-04 05:31:02.383361 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.383379 | orchestrator |
2026-02-04 05:31:02.383397 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-04 05:31:02.383416 | orchestrator | Wednesday 04 February 2026 05:30:34 +0000 (0:00:01.121) 0:07:10.313 ****
2026-02-04 05:31:02.383434 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.383452 | orchestrator |
2026-02-04 05:31:02.383470 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-04 05:31:02.383489 | orchestrator | Wednesday 04 February 2026 05:30:36 +0000 (0:00:01.171) 0:07:11.484 ****
2026-02-04 05:31:02.383502 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:02.383513 | orchestrator |
2026-02-04 05:31:02.383524 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-04 05:31:02.383535 | orchestrator | Wednesday 04 February 2026 05:30:37 +0000 (0:00:01.590) 0:07:13.075 ****
2026-02-04 05:31:02.383546 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:02.383557 | orchestrator |
2026-02-04 05:31:02.383613 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-04 05:31:02.383626 | orchestrator | Wednesday 04 February 2026 05:30:39 +0000 (0:00:01.571) 0:07:14.646 ****
2026-02-04 05:31:02.383654 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.383665 | orchestrator |
2026-02-04 05:31:02.383676 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-04 05:31:02.383687 | orchestrator | Wednesday 04 February 2026 05:30:40 +0000 (0:00:01.172) 0:07:15.819 ****
2026-02-04 05:31:02.383697 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:02.383708 | orchestrator |
2026-02-04 05:31:02.383719 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-04 05:31:02.383730 | orchestrator | Wednesday 04 February 2026 05:30:41 +0000 (0:00:01.170) 0:07:16.990 ****
2026-02-04 05:31:02.383741 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.383751 | orchestrator |
2026-02-04 05:31:02.383762 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-04 05:31:02.383773 | orchestrator | Wednesday 04 February 2026 05:30:42 +0000 (0:00:01.179) 0:07:18.170 ****
2026-02-04 05:31:02.383784 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.383794 | orchestrator |
2026-02-04 05:31:02.383837 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-04 05:31:02.383850 | orchestrator | Wednesday 04 February 2026 05:30:43 +0000 (0:00:01.105) 0:07:19.275 ****
2026-02-04 05:31:02.383861 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.383872 | orchestrator |
2026-02-04 05:31:02.383882 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-04 05:31:02.383893 | orchestrator | Wednesday 04 February 2026 05:30:44 +0000 (0:00:01.154) 0:07:20.430 ****
2026-02-04 05:31:02.383904 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.383914 | orchestrator |
2026-02-04 05:31:02.383925 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-04 05:31:02.383936 | orchestrator | Wednesday 04 February 2026 05:30:46 +0000 (0:00:01.170) 0:07:21.601 ****
2026-02-04 05:31:02.383946 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.383957 | orchestrator |
2026-02-04 05:31:02.383967 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-04 05:31:02.383978 | orchestrator | Wednesday 04 February 2026 05:30:47 +0000 (0:00:01.147) 0:07:22.748 ****
2026-02-04 05:31:02.383989 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:02.383999 | orchestrator |
2026-02-04 05:31:02.384010 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-04 05:31:02.384021 | orchestrator | Wednesday 04 February 2026 05:30:48 +0000 (0:00:01.144) 0:07:23.893 ****
2026-02-04 05:31:02.384031 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:02.384042 | orchestrator |
2026-02-04 05:31:02.384052 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-04 05:31:02.384063 | orchestrator | Wednesday 04 February 2026 05:30:49 +0000 (0:00:01.164) 0:07:25.057 ****
2026-02-04 05:31:02.384074 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:02.384084 | orchestrator |
2026-02-04 05:31:02.384095 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-04 05:31:02.384106 | orchestrator | Wednesday 04 February 2026 05:30:50 +0000 (0:00:01.163) 0:07:26.221 ****
2026-02-04 05:31:02.384116 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.384127 | orchestrator |
2026-02-04 05:31:02.384138 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-04 05:31:02.384148 | orchestrator | Wednesday 04 February 2026 05:30:51 +0000 (0:00:01.191) 0:07:27.413 ****
2026-02-04 05:31:02.384159 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.384170 | orchestrator |
2026-02-04 05:31:02.384180 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-04 05:31:02.384191 | orchestrator | Wednesday 04 February 2026 05:30:53 +0000 (0:00:01.168) 0:07:28.582 ****
2026-02-04 05:31:02.384202 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.384212 | orchestrator |
2026-02-04 05:31:02.384223 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-04 05:31:02.384234 | orchestrator | Wednesday 04 February 2026 05:30:54 +0000 (0:00:01.151) 0:07:29.734 ****
2026-02-04 05:31:02.384252 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.384263 | orchestrator |
2026-02-04 05:31:02.384274 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-04 05:31:02.384286 | orchestrator | Wednesday 04 February 2026 05:30:55 +0000 (0:00:01.163) 0:07:30.897 ****
2026-02-04 05:31:02.384304 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.384320 | orchestrator |
2026-02-04 05:31:02.384337 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-04 05:31:02.384354 | orchestrator | Wednesday 04 February 2026 05:30:56 +0000 (0:00:01.139) 0:07:32.036 ****
2026-02-04 05:31:02.384370 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.384385 | orchestrator |
2026-02-04 05:31:02.384401 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-04 05:31:02.384418 | orchestrator | Wednesday 04 February 2026 05:30:57 +0000 (0:00:01.158) 0:07:33.195 ****
2026-02-04 05:31:02.384435 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.384453 | orchestrator |
2026-02-04 05:31:02.384469 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-04 05:31:02.384487 | orchestrator | Wednesday 04 February 2026 05:30:58 +0000 (0:00:01.107) 0:07:34.302 ****
2026-02-04 05:31:02.384506 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.384525 | orchestrator |
2026-02-04 05:31:02.384544 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-04 05:31:02.384563 | orchestrator | Wednesday 04 February 2026 05:31:00 +0000 (0:00:01.165) 0:07:35.468 ****
2026-02-04 05:31:02.384581 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.384600 | orchestrator |
2026-02-04 05:31:02.384618 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-04 05:31:02.384634 | orchestrator | Wednesday 04 February 2026 05:31:01 +0000 (0:00:01.169) 0:07:36.637 ****
2026-02-04 05:31:02.384645 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:02.384656 | orchestrator |
2026-02-04 05:31:02.384667 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-04 05:31:02.384686 | orchestrator | Wednesday 04 February 2026 05:31:02 +0000 (0:00:01.160) 0:07:37.798 ****
2026-02-04 05:31:54.736834 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.737092 | orchestrator |
2026-02-04 05:31:54.737111 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-04 05:31:54.737125 | orchestrator | Wednesday 04 February 2026 05:31:03 +0000 (0:00:01.114) 0:07:38.913 ****
2026-02-04 05:31:54.737137 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.737148 | orchestrator |
2026-02-04 05:31:54.737159 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-04 05:31:54.737170 | orchestrator | Wednesday 04 February 2026 05:31:04 +0000 (0:00:01.136) 0:07:40.049 ****
2026-02-04 05:31:54.737181 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:54.737192 | orchestrator |
2026-02-04 05:31:54.737203 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-04 05:31:54.737214 | orchestrator | Wednesday 04 February 2026 05:31:06 +0000 (0:00:02.046) 0:07:42.096 ****
2026-02-04 05:31:54.737225 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:54.737235 | orchestrator |
2026-02-04 05:31:54.737246 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-04 05:31:54.737257 | orchestrator | Wednesday 04 February 2026 05:31:09 +0000 (0:00:02.606) 0:07:44.703 ****
2026-02-04 05:31:54.737268 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-04 05:31:54.737279 | orchestrator |
2026-02-04 05:31:54.737290 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-04 05:31:54.737301 | orchestrator | Wednesday 04 February 2026 05:31:10 +0000 (0:00:01.501) 0:07:46.204 ****
2026-02-04 05:31:54.737311 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.737322 | orchestrator |
2026-02-04 05:31:54.737333 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-04 05:31:54.737370 | orchestrator | Wednesday 04 February 2026 05:31:11 +0000 (0:00:01.124) 0:07:47.328 ****
2026-02-04 05:31:54.737383 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.737397 | orchestrator |
2026-02-04 05:31:54.737410 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-04 05:31:54.737422 | orchestrator | Wednesday 04 February 2026 05:31:13 +0000 (0:00:01.141) 0:07:48.470 ****
2026-02-04 05:31:54.737434 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-04 05:31:54.737448 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-04 05:31:54.737461 | orchestrator |
2026-02-04 05:31:54.737473 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-04 05:31:54.737485 | orchestrator | Wednesday 04 February 2026 05:31:14 +0000 (0:00:01.893) 0:07:50.363 ****
2026-02-04 05:31:54.737499 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:54.737511 | orchestrator |
2026-02-04 05:31:54.737524 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-04 05:31:54.737536 | orchestrator | Wednesday 04 February 2026 05:31:16 +0000 (0:00:01.789) 0:07:52.152 ****
2026-02-04 05:31:54.737549 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.737561 | orchestrator |
2026-02-04 05:31:54.737573 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-04 05:31:54.737585 | orchestrator | Wednesday 04 February 2026 05:31:17 +0000 (0:00:01.201) 0:07:53.353 ****
2026-02-04 05:31:54.737598 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.737610 | orchestrator |
2026-02-04 05:31:54.737623 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-04 05:31:54.737636 | orchestrator | Wednesday 04 February 2026 05:31:19 +0000 (0:00:01.144) 0:07:54.498 ****
2026-02-04 05:31:54.737649 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.737660 | orchestrator |
2026-02-04 05:31:54.737673 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-04 05:31:54.737685 | orchestrator | Wednesday 04 February 2026 05:31:20 +0000 (0:00:01.125) 0:07:55.623 ****
2026-02-04 05:31:54.737698 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-04 05:31:54.737711 | orchestrator |
2026-02-04 05:31:54.737723 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-04 05:31:54.737736 | orchestrator | Wednesday 04 February 2026 05:31:21 +0000 (0:00:01.518) 0:07:57.141 ****
2026-02-04 05:31:54.737747 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:54.737757 | orchestrator |
2026-02-04 05:31:54.737768 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-04 05:31:54.737779 | orchestrator | Wednesday 04 February 2026 05:31:23 +0000 (0:00:02.013) 0:07:59.155 ****
2026-02-04 05:31:54.737790 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-04 05:31:54.737800 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-04 05:31:54.737811 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-04 05:31:54.737822 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.737832 | orchestrator |
2026-02-04 05:31:54.737870 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-04 05:31:54.737883 | orchestrator | Wednesday 04 February 2026 05:31:24 +0000 (0:00:01.137) 0:08:00.292 ****
2026-02-04 05:31:54.737893 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.737904 | orchestrator |
2026-02-04 05:31:54.737915 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-04 05:31:54.737925 | orchestrator | Wednesday 04 February 2026 05:31:26 +0000 (0:00:01.146) 0:08:01.439 ****
2026-02-04 05:31:54.737936 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.737947 | orchestrator |
2026-02-04 05:31:54.737958 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-04 05:31:54.737977 | orchestrator | Wednesday 04 February 2026 05:31:27 +0000 (0:00:01.177) 0:08:02.616 ****
2026-02-04 05:31:54.737988 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.737998 | orchestrator |
2026-02-04 05:31:54.738099 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-04 05:31:54.738151 | orchestrator | Wednesday 04 February 2026 05:31:28 +0000 (0:00:01.128) 0:08:03.745 ****
2026-02-04 05:31:54.738171 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.738189 | orchestrator |
2026-02-04 05:31:54.738208 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-04 05:31:54.738226 | orchestrator | Wednesday 04 February 2026 05:31:29 +0000 (0:00:01.145) 0:08:04.891 ****
2026-02-04 05:31:54.738245 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.738265 | orchestrator |
2026-02-04 05:31:54.738280 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-04 05:31:54.738298 | orchestrator | Wednesday 04 February 2026 05:31:30 +0000 (0:00:01.144) 0:08:06.036 ****
2026-02-04 05:31:54.738325 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:54.738345 | orchestrator |
2026-02-04 05:31:54.738362 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-04 05:31:54.738380 | orchestrator | Wednesday 04 February 2026 05:31:33 +0000 (0:00:02.536) 0:08:08.572 ****
2026-02-04 05:31:54.738397 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:54.738415 | orchestrator |
2026-02-04 05:31:54.738431 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-04 05:31:54.738449 | orchestrator | Wednesday 04 February 2026 05:31:34 +0000 (0:00:01.151) 0:08:09.723 ****
2026-02-04 05:31:54.738466 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-04 05:31:54.738483 | orchestrator |
2026-02-04 05:31:54.738502 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-04 05:31:54.738520 | orchestrator | Wednesday 04 February 2026 05:31:35 +0000 (0:00:01.480) 0:08:11.204 ****
2026-02-04 05:31:54.738539 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.738559 | orchestrator |
2026-02-04 05:31:54.738577 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-04 05:31:54.738596 | orchestrator | Wednesday 04 February 2026 05:31:36 +0000 (0:00:01.176) 0:08:12.381 ****
2026-02-04 05:31:54.738616 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.738634 | orchestrator |
2026-02-04 05:31:54.738652 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-04 05:31:54.738663 | orchestrator | Wednesday 04 February 2026 05:31:38 +0000 (0:00:01.147) 0:08:13.529 ****
2026-02-04 05:31:54.738674 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.738684 | orchestrator |
2026-02-04 05:31:54.738695 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-04 05:31:54.738705 | orchestrator | Wednesday 04 February 2026 05:31:39 +0000 (0:00:01.139) 0:08:14.668 ****
2026-02-04 05:31:54.738716 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.738727 | orchestrator |
2026-02-04 05:31:54.738737 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-04 05:31:54.738748 | orchestrator | Wednesday 04 February 2026 05:31:40 +0000 (0:00:01.164) 0:08:15.833 ****
2026-02-04 05:31:54.738759 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.738769 | orchestrator |
2026-02-04 05:31:54.738780 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-04 05:31:54.738793 | orchestrator | Wednesday 04 February 2026 05:31:41 +0000 (0:00:01.153) 0:08:16.986 ****
2026-02-04 05:31:54.738812 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.738829 | orchestrator |
2026-02-04 05:31:54.738876 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-04 05:31:54.738895 | orchestrator | Wednesday 04 February 2026 05:31:42 +0000 (0:00:01.119) 0:08:18.106 ****
2026-02-04 05:31:54.738911 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.738930 | orchestrator |
2026-02-04 05:31:54.738948 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-04 05:31:54.738983 | orchestrator | Wednesday 04 February 2026 05:31:43 +0000 (0:00:01.181) 0:08:19.287 ****
2026-02-04 05:31:54.739000 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:31:54.739011 | orchestrator |
2026-02-04 05:31:54.739022 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-04 05:31:54.739033 | orchestrator | Wednesday 04 February 2026 05:31:45 +0000 (0:00:01.202) 0:08:20.490 ****
2026-02-04 05:31:54.739043 | orchestrator | ok: [testbed-node-0]
2026-02-04 05:31:54.739054 | orchestrator |
2026-02-04 05:31:54.739065 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-04 05:31:54.739075 | orchestrator | Wednesday 04 February 2026 05:31:46 +0000 (0:00:01.194) 0:08:21.685 ****
2026-02-04 05:31:54.739086 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-04 05:31:54.739097 | orchestrator |
2026-02-04 05:31:54.739114 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-04 05:31:54.739133 | orchestrator | Wednesday 04 February 2026 05:31:47 +0000 (0:00:01.538) 0:08:23.223 ****
2026-02-04 05:31:54.739150 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-04 05:31:54.739169 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-04 05:31:54.739188 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-04 05:31:54.739206 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-04 05:31:54.739224 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-04 05:31:54.739241 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-04 05:31:54.739258 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-04 05:31:54.739275 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-04 05:31:54.739293 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-04 05:31:54.739311 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-04 05:31:54.739329 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-04 05:31:54.739346 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-04 05:31:54.739365 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-04 05:31:54.739395 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-04 05:31:54.739429 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-04 05:32:43.066512 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-04 05:32:43.066633 | orchestrator |
2026-02-04 05:32:43.066658 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-04 05:32:43.066676 | orchestrator | Wednesday 04 February 2026 05:31:54 +0000 (0:00:06.915) 0:08:30.139 ****
2026-02-04 05:32:43.066693 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.066705 | orchestrator |
2026-02-04 05:32:43.066715 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-04 05:32:43.066725 | orchestrator | Wednesday 04 February 2026 05:31:55 +0000 (0:00:01.149) 0:08:31.289 ****
2026-02-04 05:32:43.066734 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.066744 | orchestrator |
2026-02-04 05:32:43.066755 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-04 05:32:43.066764 | orchestrator | Wednesday 04 February 2026 05:31:56 +0000 (0:00:01.128) 0:08:32.418 ****
2026-02-04 05:32:43.066774 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.066783 | orchestrator |
2026-02-04 05:32:43.066793 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-04 05:32:43.066802 | orchestrator | Wednesday 04 February 2026 05:31:58 +0000 (0:00:01.172) 0:08:33.591 ****
2026-02-04 05:32:43.066812 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.066821 | orchestrator |
2026-02-04 05:32:43.066831 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-04 05:32:43.066840 | orchestrator | Wednesday 04 February 2026 05:31:59 +0000 (0:00:01.103) 0:08:34.695 ****
2026-02-04 05:32:43.066921 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.066935 | orchestrator |
2026-02-04 05:32:43.066945 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-04 05:32:43.066954 | orchestrator | Wednesday 04 February 2026 05:32:00 +0000 (0:00:01.154) 0:08:35.850 ****
2026-02-04 05:32:43.066964 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.066973 | orchestrator |
2026-02-04 05:32:43.066982 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-04 05:32:43.066993 | orchestrator | Wednesday 04 February 2026 05:32:01 +0000 (0:00:01.122) 0:08:36.972 ****
2026-02-04 05:32:43.067003 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067012 | orchestrator |
2026-02-04 05:32:43.067022 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-04 05:32:43.067031 | orchestrator | Wednesday 04 February 2026 05:32:02 +0000 (0:00:01.157) 0:08:38.130 ****
2026-02-04 05:32:43.067041 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067050 | orchestrator |
2026-02-04 05:32:43.067063 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-04 05:32:43.067075 | orchestrator | Wednesday 04 February 2026 05:32:03 +0000 (0:00:01.187) 0:08:39.318 ****
2026-02-04 05:32:43.067086 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067098 | orchestrator |
2026-02-04 05:32:43.067110 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-04 05:32:43.067121 | orchestrator | Wednesday 04 February 2026 05:32:05 +0000 (0:00:01.129) 0:08:40.448 ****
2026-02-04 05:32:43.067133 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067145 | orchestrator |
2026-02-04 05:32:43.067156 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-04 05:32:43.067167 | orchestrator | Wednesday 04 February 2026 05:32:06 +0000 (0:00:01.144) 0:08:41.593 ****
2026-02-04 05:32:43.067178 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067190 | orchestrator |
2026-02-04 05:32:43.067202 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-04 05:32:43.067213 | orchestrator | Wednesday 04 February 2026 05:32:07 +0000 (0:00:01.122) 0:08:42.715 ****
2026-02-04 05:32:43.067225 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067236 | orchestrator |
2026-02-04 05:32:43.067248 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-04 05:32:43.067260 | orchestrator | Wednesday 04 February 2026 05:32:08 +0000 (0:00:01.183) 0:08:43.899 ****
2026-02-04 05:32:43.067272 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067284 | orchestrator |
2026-02-04 05:32:43.067295 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-04 05:32:43.067307 | orchestrator | Wednesday 04 February 2026 05:32:09 +0000 (0:00:01.261) 0:08:45.160 ****
2026-02-04 05:32:43.067319 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067330 | orchestrator |
2026-02-04 05:32:43.067339 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-04 05:32:43.067349 | orchestrator | Wednesday 04 February 2026 05:32:10 +0000 (0:00:01.123) 0:08:46.284 ****
2026-02-04 05:32:43.067358 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067368 | orchestrator |
2026-02-04 05:32:43.067377 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-04 05:32:43.067386 | orchestrator | Wednesday 04 February 2026 05:32:12 +0000 (0:00:01.224) 0:08:47.509 ****
2026-02-04 05:32:43.067396 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067405 | orchestrator |
2026-02-04 05:32:43.067415 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-04 05:32:43.067424 | orchestrator | Wednesday 04 February 2026 05:32:13 +0000 (0:00:01.116) 0:08:48.625 ****
2026-02-04 05:32:43.067434 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067443 | orchestrator |
2026-02-04 05:32:43.067460 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-04 05:32:43.067471 | orchestrator | Wednesday 04 February 2026 05:32:14 +0000 (0:00:01.129) 0:08:49.755 ****
2026-02-04 05:32:43.067481 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067490 | orchestrator |
2026-02-04 05:32:43.067500 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-04 05:32:43.067509 | orchestrator | Wednesday 04 February 2026 05:32:15 +0000 (0:00:01.157) 0:08:50.913 ****
2026-02-04 05:32:43.067533 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067543 | orchestrator |
2026-02-04 05:32:43.067568 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-04 05:32:43.067579 | orchestrator | Wednesday 04 February 2026 05:32:16 +0000 (0:00:01.137) 0:08:52.050 ****
2026-02-04 05:32:43.067588 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067632 | orchestrator |
2026-02-04 05:32:43.067653 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-04 05:32:43.067663 | orchestrator | Wednesday 04 February 2026 05:32:17 +0000 (0:00:01.162) 0:08:53.213 ****
2026-02-04 05:32:43.067673 | orchestrator | skipping: [testbed-node-0]
2026-02-04 05:32:43.067682 | orchestrator |
2026-02-04 05:32:43.067692 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-04 05:32:43.067701 | orchestrator | Wednesday 04 February 2026 05:32:18 +0000 (0:00:01.131) 0:08:54.345 ****
2026-02-04 05:32:43.067711 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-04 05:32:43.067721 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-04 05:32:43.067731 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-04
05:32:43.067740 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:32:43.067750 | orchestrator | 2026-02-04 05:32:43.067759 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-04 05:32:43.067769 | orchestrator | Wednesday 04 February 2026 05:32:20 +0000 (0:00:01.750) 0:08:56.095 **** 2026-02-04 05:32:43.067779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-04 05:32:43.067788 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-04 05:32:43.067798 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-04 05:32:43.067807 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:32:43.067817 | orchestrator | 2026-02-04 05:32:43.067826 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-04 05:32:43.067836 | orchestrator | Wednesday 04 February 2026 05:32:22 +0000 (0:00:01.564) 0:08:57.660 **** 2026-02-04 05:32:43.067846 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-04 05:32:43.067855 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-04 05:32:43.067865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-04 05:32:43.067897 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:32:43.067913 | orchestrator | 2026-02-04 05:32:43.067923 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-04 05:32:43.067932 | orchestrator | Wednesday 04 February 2026 05:32:23 +0000 (0:00:01.462) 0:08:59.122 **** 2026-02-04 05:32:43.067942 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:32:43.067951 | orchestrator | 2026-02-04 05:32:43.067961 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-04 05:32:43.067970 | orchestrator | Wednesday 04 February 2026 05:32:24 +0000 (0:00:01.128) 0:09:00.251 **** 
2026-02-04 05:32:43.067980 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-04 05:32:43.067990 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:32:43.067999 | orchestrator | 2026-02-04 05:32:43.068009 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-04 05:32:43.068018 | orchestrator | Wednesday 04 February 2026 05:32:26 +0000 (0:00:01.389) 0:09:01.641 **** 2026-02-04 05:32:43.068028 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:32:43.068046 | orchestrator | 2026-02-04 05:32:43.068056 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-04 05:32:43.068065 | orchestrator | Wednesday 04 February 2026 05:32:28 +0000 (0:00:01.912) 0:09:03.554 **** 2026-02-04 05:32:43.068075 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:32:43.068084 | orchestrator | 2026-02-04 05:32:43.068094 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-04 05:32:43.068103 | orchestrator | Wednesday 04 February 2026 05:32:29 +0000 (0:00:01.207) 0:09:04.762 **** 2026-02-04 05:32:43.068113 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-02-04 05:32:43.068123 | orchestrator | 2026-02-04 05:32:43.068133 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-04 05:32:43.068142 | orchestrator | Wednesday 04 February 2026 05:32:30 +0000 (0:00:01.482) 0:09:06.244 **** 2026-02-04 05:32:43.068152 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-04 05:32:43.068162 | orchestrator | 2026-02-04 05:32:43.068171 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-04 05:32:43.068181 | orchestrator | Wednesday 04 February 2026 05:32:34 +0000 (0:00:03.503) 0:09:09.747 **** 2026-02-04 05:32:43.068190 | orchestrator | skipping: 
[testbed-node-0] 2026-02-04 05:32:43.068200 | orchestrator | 2026-02-04 05:32:43.068209 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-04 05:32:43.068219 | orchestrator | Wednesday 04 February 2026 05:32:35 +0000 (0:00:01.189) 0:09:10.937 **** 2026-02-04 05:32:43.068228 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:32:43.068237 | orchestrator | 2026-02-04 05:32:43.068247 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-04 05:32:43.068256 | orchestrator | Wednesday 04 February 2026 05:32:36 +0000 (0:00:01.158) 0:09:12.095 **** 2026-02-04 05:32:43.068266 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:32:43.068275 | orchestrator | 2026-02-04 05:32:43.068285 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-04 05:32:43.068294 | orchestrator | Wednesday 04 February 2026 05:32:37 +0000 (0:00:01.201) 0:09:13.297 **** 2026-02-04 05:32:43.068304 | orchestrator | changed: [testbed-node-0] 2026-02-04 05:32:43.068314 | orchestrator | 2026-02-04 05:32:43.068323 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-04 05:32:43.068333 | orchestrator | Wednesday 04 February 2026 05:32:39 +0000 (0:00:02.059) 0:09:15.357 **** 2026-02-04 05:32:43.068342 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:32:43.068352 | orchestrator | 2026-02-04 05:32:43.068361 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-04 05:32:43.068370 | orchestrator | Wednesday 04 February 2026 05:32:41 +0000 (0:00:01.619) 0:09:16.976 **** 2026-02-04 05:32:43.068380 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:32:43.068389 | orchestrator | 2026-02-04 05:32:43.068411 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-04 05:33:41.094569 | orchestrator | Wednesday 
04 February 2026 05:32:43 +0000 (0:00:01.503) 0:09:18.480 **** 2026-02-04 05:33:41.094685 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:33:41.094732 | orchestrator | 2026-02-04 05:33:41.094756 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-04 05:33:41.094767 | orchestrator | Wednesday 04 February 2026 05:32:44 +0000 (0:00:01.546) 0:09:20.026 **** 2026-02-04 05:33:41.094777 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:33:41.094787 | orchestrator | 2026-02-04 05:33:41.094797 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-04 05:33:41.094807 | orchestrator | Wednesday 04 February 2026 05:32:46 +0000 (0:00:01.765) 0:09:21.792 **** 2026-02-04 05:33:41.094818 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:33:41.094827 | orchestrator | 2026-02-04 05:33:41.094837 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-04 05:33:41.094848 | orchestrator | Wednesday 04 February 2026 05:32:48 +0000 (0:00:01.718) 0:09:23.511 **** 2026-02-04 05:33:41.094882 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-04 05:33:41.094894 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-04 05:33:41.094905 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-04 05:33:41.094959 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-02-04 05:33:41.094971 | orchestrator | 2026-02-04 05:33:41.094982 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-04 05:33:41.094992 | orchestrator | Wednesday 04 February 2026 05:32:51 +0000 (0:00:03.798) 0:09:27.309 **** 2026-02-04 05:33:41.095003 | orchestrator | changed: [testbed-node-0] 2026-02-04 05:33:41.095014 | orchestrator | 2026-02-04 05:33:41.095025 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-02-04 05:33:41.095035 | orchestrator | Wednesday 04 February 2026 05:32:53 +0000 (0:00:02.080) 0:09:29.390 **** 2026-02-04 05:33:41.095046 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:33:41.095056 | orchestrator | 2026-02-04 05:33:41.095067 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-04 05:33:41.095077 | orchestrator | Wednesday 04 February 2026 05:32:55 +0000 (0:00:01.170) 0:09:30.560 **** 2026-02-04 05:33:41.095087 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:33:41.095098 | orchestrator | 2026-02-04 05:33:41.095108 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-04 05:33:41.095119 | orchestrator | Wednesday 04 February 2026 05:32:56 +0000 (0:00:01.155) 0:09:31.715 **** 2026-02-04 05:33:41.095130 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:33:41.095140 | orchestrator | 2026-02-04 05:33:41.095150 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-04 05:33:41.095162 | orchestrator | Wednesday 04 February 2026 05:32:58 +0000 (0:00:02.092) 0:09:33.808 **** 2026-02-04 05:33:41.095174 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:33:41.095185 | orchestrator | 2026-02-04 05:33:41.095197 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-04 05:33:41.095209 | orchestrator | Wednesday 04 February 2026 05:32:59 +0000 (0:00:01.487) 0:09:35.296 **** 2026-02-04 05:33:41.095221 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:33:41.095233 | orchestrator | 2026-02-04 05:33:41.095245 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-04 05:33:41.095258 | orchestrator | Wednesday 04 February 2026 05:33:01 +0000 (0:00:01.156) 0:09:36.452 **** 2026-02-04 05:33:41.095270 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-04 05:33:41.095282 | orchestrator | 2026-02-04 05:33:41.095294 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-04 05:33:41.095304 | orchestrator | Wednesday 04 February 2026 05:33:02 +0000 (0:00:01.462) 0:09:37.915 **** 2026-02-04 05:33:41.095314 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:33:41.095325 | orchestrator | 2026-02-04 05:33:41.095334 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-04 05:33:41.095345 | orchestrator | Wednesday 04 February 2026 05:33:03 +0000 (0:00:01.135) 0:09:39.050 **** 2026-02-04 05:33:41.095355 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:33:41.095365 | orchestrator | 2026-02-04 05:33:41.095375 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-04 05:33:41.095386 | orchestrator | Wednesday 04 February 2026 05:33:04 +0000 (0:00:01.162) 0:09:40.213 **** 2026-02-04 05:33:41.095397 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-04 05:33:41.095408 | orchestrator | 2026-02-04 05:33:41.095420 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-04 05:33:41.095432 | orchestrator | Wednesday 04 February 2026 05:33:06 +0000 (0:00:01.511) 0:09:41.724 **** 2026-02-04 05:33:41.095443 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:33:41.095455 | orchestrator | 2026-02-04 05:33:41.095467 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-04 05:33:41.095479 | orchestrator | Wednesday 04 February 2026 05:33:08 +0000 (0:00:02.337) 0:09:44.062 **** 2026-02-04 05:33:41.095500 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:33:41.095511 | orchestrator | 2026-02-04 05:33:41.095522 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-02-04 05:33:41.095533 | orchestrator | Wednesday 04 February 2026 05:33:10 +0000 (0:00:02.008) 0:09:46.070 **** 2026-02-04 05:33:41.095545 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:33:41.095556 | orchestrator | 2026-02-04 05:33:41.095566 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-04 05:33:41.095575 | orchestrator | Wednesday 04 February 2026 05:33:13 +0000 (0:00:02.479) 0:09:48.549 **** 2026-02-04 05:33:41.095584 | orchestrator | changed: [testbed-node-0] 2026-02-04 05:33:41.095594 | orchestrator | 2026-02-04 05:33:41.095604 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-04 05:33:41.095614 | orchestrator | Wednesday 04 February 2026 05:33:16 +0000 (0:00:03.295) 0:09:51.845 **** 2026-02-04 05:33:41.095639 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-04 05:33:41.095649 | orchestrator | 2026-02-04 05:33:41.095680 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-02-04 05:33:41.095692 | orchestrator | Wednesday 04 February 2026 05:33:18 +0000 (0:00:01.667) 0:09:53.513 **** 2026-02-04 05:33:41.095702 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:33:41.095712 | orchestrator | 2026-02-04 05:33:41.095723 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-04 05:33:41.095732 | orchestrator | Wednesday 04 February 2026 05:33:20 +0000 (0:00:02.304) 0:09:55.818 **** 2026-02-04 05:33:41.095742 | orchestrator | ok: [testbed-node-0] 2026-02-04 05:33:41.095752 | orchestrator | 2026-02-04 05:33:41.095761 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-04 05:33:41.095771 | orchestrator | Wednesday 04 February 2026 05:33:23 +0000 (0:00:03.002) 0:09:58.821 **** 2026-02-04 05:33:41.095780 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:33:41.095790 | orchestrator | 2026-02-04 05:33:41.095799 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-04 05:33:41.095809 | orchestrator | Wednesday 04 February 2026 05:33:24 +0000 (0:00:01.126) 0:09:59.947 **** 2026-02-04 05:33:41.095822 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-04 05:33:41.095834 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-02-04 05:33:41.095845 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-04 05:33:41.095855 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-04 05:33:41.095867 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-04 05:33:41.095887 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__6181aec35510f62dd16e6842e1ad80b3ea59fb50'}])  2026-02-04 05:33:41.095899 | orchestrator | 2026-02-04 05:33:41.095910 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-04 05:33:41.095955 | orchestrator | Wednesday 04 February 2026 05:33:34 +0000 (0:00:10.212) 0:10:10.160 **** 
2026-02-04 05:33:41.095966 | orchestrator | changed: [testbed-node-0] 2026-02-04 05:33:41.095976 | orchestrator | 2026-02-04 05:33:41.095987 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-04 05:33:41.095998 | orchestrator | Wednesday 04 February 2026 05:33:37 +0000 (0:00:02.639) 0:10:12.800 **** 2026-02-04 05:33:41.096009 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-04 05:33:41.096020 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-04 05:33:41.096030 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-04 05:33:41.096041 | orchestrator | 2026-02-04 05:33:41.096052 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-04 05:33:41.096063 | orchestrator | Wednesday 04 February 2026 05:33:39 +0000 (0:00:02.298) 0:10:15.098 **** 2026-02-04 05:33:41.096074 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-04 05:33:41.096085 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-04 05:33:41.096096 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-04 05:33:41.096106 | orchestrator | skipping: [testbed-node-0] 2026-02-04 05:33:41.096117 | orchestrator | 2026-02-04 05:33:41.096127 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-04 05:33:41.096154 | orchestrator | Wednesday 04 February 2026 05:33:41 +0000 (0:00:01.411) 0:10:16.510 **** 2026-02-04 06:05:02.774370 | orchestrator | skipping: [testbed-node-0] 2026-02-04 06:05:02.774550 | orchestrator | 2026-02-04 06:05:02.774571 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-04 06:05:02.774584 | orchestrator | Wednesday 04 February 2026 05:33:42 +0000 (0:00:01.120) 0:10:17.630 ****
2026-02-04 06:05:02.774621 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] ***
2026-02-04 06:05:02.774876 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (5 retries left).
2026-02-04 06:05:02.775270 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (4 retries left).
2026-02-04 06:05:02.775502 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (3 retries left).
2026-02-04 06:05:02.775870 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (2 retries left).
2026-02-04 06:05:02.776214 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (1 retries left).
2026-02-04 06:05:02.776558 | orchestrator | fatal: [testbed-node-0]: FAILED!
=> {"attempts": 5, "changed": false, "cmd": ["docker", "exec", "ceph-mon-testbed-node-0", "ceph", "--cluster", "ceph", "-m", "192.168.16.10", "quorum_status", "--format", "json"], "delta": "0:05:00.286996", "end": "2026-02-04 06:05:01.202912", "msg": "non-zero return code", "rc": 1, "start": "2026-02-04 06:00:00.915916", "stderr": "2026-02-04T06:05:01.181+0000 7736b752f640 0 monclient(hunting): authenticate timed out after 300\n[errno 110] RADOS timed out (error connecting to the cluster)", "stderr_lines": ["2026-02-04T06:05:01.181+0000 7736b752f640 0 monclient(hunting): authenticate timed out after 300", "[errno 110] RADOS timed out (error connecting to the cluster)"], "stdout": "", "stdout_lines": []} 2026-02-04 06:05:02.776573 | orchestrator | 2026-02-04 06:05:02.776592 | orchestrator | TASK [Unmask the mon service] ************************************************** 2026-02-04 06:05:02.776618 | orchestrator | Wednesday 04 February 2026 06:05:02 +0000 (0:31:20.561) 0:41:38.192 **** 2026-02-04 06:05:10.087901 | orchestrator | ok: [testbed-node-0] 2026-02-04 06:05:10.087997 | orchestrator | 2026-02-04 06:05:10.088010 | orchestrator | TASK [Unmask the mgr service] ************************************************** 2026-02-04 06:05:10.088021 | orchestrator | Wednesday 04 February 2026 06:05:04 +0000 (0:00:01.933) 0:41:40.126 **** 2026-02-04 06:05:10.088029 | orchestrator | ok: [testbed-node-0] 2026-02-04 06:05:10.088037 | orchestrator | 2026-02-04 06:05:10.088045 | orchestrator | TASK [Stop the playbook execution] ********************************************* 2026-02-04 06:05:10.088054 | orchestrator | Wednesday 04 February 2026 06:05:06 +0000 (0:00:01.846) 0:41:41.972 **** 2026-02-04 06:05:10.088063 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "There was an error during monitor upgrade. 
Please, check the previous task results."} 2026-02-04 06:05:10.088073 | orchestrator | 2026-02-04 06:05:10.088081 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-04 06:05:10.088089 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-04 06:05:10.088098 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0 2026-02-04 06:05:10.088106 | orchestrator | testbed-node-0 : ok=121  changed=7  unreachable=0 failed=1  skipped=164  rescued=1  ignored=0 2026-02-04 06:05:10.088115 | orchestrator | testbed-node-1 : ok=25  changed=1  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0 2026-02-04 06:05:10.088123 | orchestrator | testbed-node-2 : ok=25  changed=1  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0 2026-02-04 06:05:10.088131 | orchestrator | testbed-node-3 : ok=33  changed=1  unreachable=0 failed=0 skipped=74  rescued=0 ignored=0 2026-02-04 06:05:10.088161 | orchestrator | testbed-node-4 : ok=33  changed=1  unreachable=0 failed=0 skipped=71  rescued=0 ignored=0 2026-02-04 06:05:10.088169 | orchestrator | testbed-node-5 : ok=33  changed=1  unreachable=0 failed=0 skipped=71  rescued=0 ignored=0 2026-02-04 06:05:10.088177 | orchestrator | 2026-02-04 06:05:10.088185 | orchestrator | 2026-02-04 06:05:10.088193 | orchestrator | 2026-02-04 06:05:10.088201 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-04 06:05:10.088209 | orchestrator | Wednesday 04 February 2026 06:05:09 +0000 (0:00:02.854) 0:41:44.826 **** 2026-02-04 06:05:10.088217 | orchestrator | =============================================================================== 2026-02-04 06:05:10.088225 | orchestrator | Container | waiting for the containerized monitor to join the quorum... 
1880.56s 2026-02-04 06:05:10.088233 | orchestrator | Gather and delegate facts ---------------------------------------------- 36.39s 2026-02-04 06:05:10.088241 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 13.41s 2026-02-04 06:05:10.088249 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.26s 2026-02-04 06:05:10.088257 | orchestrator | Set cluster configs ---------------------------------------------------- 10.49s 2026-02-04 06:05:10.088265 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 10.21s 2026-02-04 06:05:10.088272 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.92s 2026-02-04 06:05:10.088280 | orchestrator | Gather facts ------------------------------------------------------------ 6.37s 2026-02-04 06:05:10.088288 | orchestrator | Gather facts on all Ceph hosts for following reference ------------------ 5.33s 2026-02-04 06:05:10.088297 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 4.33s 2026-02-04 06:05:10.088305 | orchestrator | Stop ceph mon ----------------------------------------------------------- 4.10s 2026-02-04 06:05:10.088312 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.80s 2026-02-04 06:05:10.088320 | orchestrator | Mask the mgr service ---------------------------------------------------- 3.52s 2026-02-04 06:05:10.088328 | orchestrator | ceph-mon : Check if monitor initial keyring already exists -------------- 3.50s 2026-02-04 06:05:10.088336 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.35s 2026-02-04 06:05:10.088344 | orchestrator | ceph-infra : Add logrotate configuration -------------------------------- 3.30s 2026-02-04 06:05:10.088352 | orchestrator | ceph-mon : Start the monitor service ------------------------------------ 3.30s 
2026-02-04 06:05:10.088360 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.27s 2026-02-04 06:05:10.088368 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.25s 2026-02-04 06:05:10.088376 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 3.00s 2026-02-04 06:05:10.749896 | orchestrator | ERROR 2026-02-04 06:05:10.750333 | orchestrator | { 2026-02-04 06:05:10.750437 | orchestrator | "delta": "2:11:56.111087", 2026-02-04 06:05:10.750508 | orchestrator | "end": "2026-02-04 06:05:10.429101", 2026-02-04 06:05:10.750569 | orchestrator | "msg": "non-zero return code", 2026-02-04 06:05:10.750626 | orchestrator | "rc": 2, 2026-02-04 06:05:10.750679 | orchestrator | "start": "2026-02-04 03:53:14.318014" 2026-02-04 06:05:10.750820 | orchestrator | } failure 2026-02-04 06:05:10.986542 | 2026-02-04 06:05:10.986811 | PLAY RECAP 2026-02-04 06:05:10.986923 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0 2026-02-04 06:05:10.986951 | 2026-02-04 06:05:11.223648 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main] 2026-02-04 06:05:11.225648 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-02-04 06:05:12.010569 | 2026-02-04 06:05:12.010771 | PLAY [Post output play] 2026-02-04 06:05:12.028847 | 2026-02-04 06:05:12.029042 | LOOP [stage-output : Register sources] 2026-02-04 06:05:12.101809 | 2026-02-04 06:05:12.102138 | TASK [stage-output : Check sudo] 2026-02-04 06:05:12.937244 | orchestrator | sudo: a password is required 2026-02-04 06:05:13.143278 | orchestrator | ok: Runtime: 0:00:00.015631 2026-02-04 06:05:13.158589 | 2026-02-04 06:05:13.158769 | LOOP [stage-output : Set source and destination for files and folders] 2026-02-04 06:05:13.197455 | 2026-02-04 06:05:13.197760 | TASK [stage-output : Build a list of source, dest 
dictionaries] 2026-02-04 06:05:13.276185 | orchestrator | ok 2026-02-04 06:05:13.285784 | 2026-02-04 06:05:13.285937 | LOOP [stage-output : Ensure target folders exist] 2026-02-04 06:05:13.744301 | orchestrator | ok: "docs" 2026-02-04 06:05:13.744589 | 2026-02-04 06:05:13.987382 | orchestrator | ok: "artifacts" 2026-02-04 06:05:14.252136 | orchestrator | ok: "logs" 2026-02-04 06:05:14.274042 | 2026-02-04 06:05:14.274195 | LOOP [stage-output : Copy files and folders to staging folder] 2026-02-04 06:05:14.319036 | 2026-02-04 06:05:14.319285 | TASK [stage-output : Make all log files readable] 2026-02-04 06:05:14.621603 | orchestrator | ok 2026-02-04 06:05:14.630168 | 2026-02-04 06:05:14.630304 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-02-04 06:05:14.664841 | orchestrator | skipping: Conditional result was False 2026-02-04 06:05:14.678505 | 2026-02-04 06:05:14.678652 | TASK [stage-output : Discover log files for compression] 2026-02-04 06:05:14.702547 | orchestrator | skipping: Conditional result was False 2026-02-04 06:05:14.711823 | 2026-02-04 06:05:14.711967 | LOOP [stage-output : Archive everything from logs] 2026-02-04 06:05:14.749915 | 2026-02-04 06:05:14.750066 | PLAY [Post cleanup play] 2026-02-04 06:05:14.758153 | 2026-02-04 06:05:14.758256 | TASK [Set cloud fact (Zuul deployment)] 2026-02-04 06:05:14.809659 | orchestrator | ok 2026-02-04 06:05:14.818399 | 2026-02-04 06:05:14.818501 | TASK [Set cloud fact (local deployment)] 2026-02-04 06:05:14.841533 | orchestrator | skipping: Conditional result was False 2026-02-04 06:05:14.850320 | 2026-02-04 06:05:14.850429 | TASK [Clean the cloud environment] 2026-02-04 06:05:15.425489 | orchestrator | 2026-02-04 06:05:15 - clean up servers 2026-02-04 06:05:16.287886 | orchestrator | 2026-02-04 06:05:16 - testbed-manager 2026-02-04 06:05:16.372295 | orchestrator | 2026-02-04 06:05:16 - testbed-node-5 2026-02-04 06:05:16.465140 | orchestrator | 2026-02-04 06:05:16 - testbed-node-3 
2026-02-04 06:05:16.549517 | orchestrator | 2026-02-04 06:05:16 - testbed-node-4 2026-02-04 06:05:16.642095 | orchestrator | 2026-02-04 06:05:16 - testbed-node-2 2026-02-04 06:05:16.733439 | orchestrator | 2026-02-04 06:05:16 - testbed-node-1 2026-02-04 06:05:16.818413 | orchestrator | 2026-02-04 06:05:16 - testbed-node-0 2026-02-04 06:05:16.906256 | orchestrator | 2026-02-04 06:05:16 - clean up keypairs 2026-02-04 06:05:16.925319 | orchestrator | 2026-02-04 06:05:16 - testbed 2026-02-04 06:05:16.953730 | orchestrator | 2026-02-04 06:05:16 - wait for servers to be gone 2026-02-04 06:05:27.899115 | orchestrator | 2026-02-04 06:05:27 - clean up ports 2026-02-04 06:05:28.104492 | orchestrator | 2026-02-04 06:05:28 - 7554cec9-8f34-47a0-a0e9-8b5c7b92bd92 2026-02-04 06:05:28.412854 | orchestrator | 2026-02-04 06:05:28 - 86ec0049-5f61-4e8d-b4a4-6c4f98279f40 2026-02-04 06:05:28.763044 | orchestrator | 2026-02-04 06:05:28 - 9435f5ea-08e5-485e-88e2-520ac9468470 2026-02-04 06:05:29.046737 | orchestrator | 2026-02-04 06:05:29 - a99df337-cdf5-4ccb-a410-87f4abcc1af6 2026-02-04 06:05:29.476407 | orchestrator | 2026-02-04 06:05:29 - a9ba93ca-74a8-40f8-825a-e65c96543f4d 2026-02-04 06:05:29.678696 | orchestrator | 2026-02-04 06:05:29 - af2b6ca2-350e-4d82-9499-83c706530046 2026-02-04 06:05:29.918770 | orchestrator | 2026-02-04 06:05:29 - b24c5627-b455-4ffa-84e3-0c182ea7d860 2026-02-04 06:05:30.116781 | orchestrator | 2026-02-04 06:05:30 - clean up volumes 2026-02-04 06:05:30.229940 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-3-node-base 2026-02-04 06:05:30.269550 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-4-node-base 2026-02-04 06:05:30.315960 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-5-node-base 2026-02-04 06:05:30.366403 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-2-node-base 2026-02-04 06:05:30.406653 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-0-node-base 2026-02-04 06:05:30.456223 | orchestrator | 2026-02-04 06:05:30 - 
testbed-volume-1-node-base 2026-02-04 06:05:30.496977 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-manager-base 2026-02-04 06:05:30.544063 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-1-node-4 2026-02-04 06:05:30.591465 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-5-node-5 2026-02-04 06:05:30.639043 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-8-node-5 2026-02-04 06:05:30.683732 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-0-node-3 2026-02-04 06:05:30.729331 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-7-node-4 2026-02-04 06:05:30.776825 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-3-node-3 2026-02-04 06:05:30.819299 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-4-node-4 2026-02-04 06:05:30.863915 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-2-node-5 2026-02-04 06:05:30.911214 | orchestrator | 2026-02-04 06:05:30 - testbed-volume-6-node-3 2026-02-04 06:05:30.958148 | orchestrator | 2026-02-04 06:05:30 - disconnect routers 2026-02-04 06:05:31.024217 | orchestrator | 2026-02-04 06:05:31 - testbed 2026-02-04 06:05:32.716311 | orchestrator | 2026-02-04 06:05:32 - clean up subnets 2026-02-04 06:05:32.772738 | orchestrator | 2026-02-04 06:05:32 - subnet-testbed-management 2026-02-04 06:05:32.988445 | orchestrator | 2026-02-04 06:05:32 - clean up networks 2026-02-04 06:05:33.162435 | orchestrator | 2026-02-04 06:05:33 - net-testbed-management 2026-02-04 06:05:33.459189 | orchestrator | 2026-02-04 06:05:33 - clean up security groups 2026-02-04 06:05:33.538483 | orchestrator | 2026-02-04 06:05:33 - testbed-management 2026-02-04 06:05:34.170131 | orchestrator | 2026-02-04 06:05:34 - testbed-node 2026-02-04 06:05:34.277074 | orchestrator | 2026-02-04 06:05:34 - clean up floating ips 2026-02-04 06:05:34.316458 | orchestrator | 2026-02-04 06:05:34 - 81.163.192.115 2026-02-04 06:05:34.681327 | orchestrator | 2026-02-04 06:05:34 - clean up routers 2026-02-04 06:05:34.791159 | orchestrator | 2026-02-04 
06:05:34 - testbed 2026-02-04 06:05:35.902417 | orchestrator | ok: Runtime: 0:00:20.542772 2026-02-04 06:05:35.906991 | 2026-02-04 06:05:35.907165 | PLAY RECAP 2026-02-04 06:05:35.907298 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-02-04 06:05:35.907361 | 2026-02-04 06:05:36.041423 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-02-04 06:05:36.044699 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-02-04 06:05:36.777147 | 2026-02-04 06:05:36.777315 | PLAY [Cleanup play] 2026-02-04 06:05:36.793816 | 2026-02-04 06:05:36.794002 | TASK [Set cloud fact (Zuul deployment)] 2026-02-04 06:05:36.863273 | orchestrator | ok 2026-02-04 06:05:36.873626 | 2026-02-04 06:05:36.873862 | TASK [Set cloud fact (local deployment)] 2026-02-04 06:05:36.908443 | orchestrator | skipping: Conditional result was False 2026-02-04 06:05:36.923898 | 2026-02-04 06:05:36.924058 | TASK [Clean the cloud environment] 2026-02-04 06:05:38.087433 | orchestrator | 2026-02-04 06:05:38 - clean up servers 2026-02-04 06:05:38.580402 | orchestrator | 2026-02-04 06:05:38 - clean up keypairs 2026-02-04 06:05:38.601511 | orchestrator | 2026-02-04 06:05:38 - wait for servers to be gone 2026-02-04 06:05:38.645295 | orchestrator | 2026-02-04 06:05:38 - clean up ports 2026-02-04 06:05:38.725667 | orchestrator | 2026-02-04 06:05:38 - clean up volumes 2026-02-04 06:05:38.808406 | orchestrator | 2026-02-04 06:05:38 - disconnect routers 2026-02-04 06:05:38.842640 | orchestrator | 2026-02-04 06:05:38 - clean up subnets 2026-02-04 06:05:38.867752 | orchestrator | 2026-02-04 06:05:38 - clean up networks 2026-02-04 06:05:39.036094 | orchestrator | 2026-02-04 06:05:39 - clean up security groups 2026-02-04 06:05:39.070827 | orchestrator | 2026-02-04 06:05:39 - clean up floating ips 2026-02-04 06:05:39.093646 | orchestrator | 2026-02-04 06:05:39 - clean up routers 2026-02-04 
06:05:39.462547 | orchestrator | ok: Runtime: 0:00:01.442816 2026-02-04 06:05:39.466576 | 2026-02-04 06:05:39.466794 | PLAY RECAP 2026-02-04 06:05:39.467000 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-02-04 06:05:39.467075 | 2026-02-04 06:05:39.594785 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-02-04 06:05:39.597318 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-04 06:05:40.338110 | 2026-02-04 06:05:40.338266 | PLAY [Base post-fetch] 2026-02-04 06:05:40.353512 | 2026-02-04 06:05:40.353642 | TASK [fetch-output : Set log path for multiple nodes] 2026-02-04 06:05:40.419537 | orchestrator | skipping: Conditional result was False 2026-02-04 06:05:40.433301 | 2026-02-04 06:05:40.433495 | TASK [fetch-output : Set log path for single node] 2026-02-04 06:05:40.478066 | orchestrator | ok 2026-02-04 06:05:40.485506 | 2026-02-04 06:05:40.485625 | LOOP [fetch-output : Ensure local output dirs] 2026-02-04 06:05:40.955280 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/5d4c0549b7dc4b04b9061401cc85362e/work/logs" 2026-02-04 06:05:41.238816 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/5d4c0549b7dc4b04b9061401cc85362e/work/artifacts" 2026-02-04 06:05:41.499112 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/5d4c0549b7dc4b04b9061401cc85362e/work/docs" 2026-02-04 06:05:41.522047 | 2026-02-04 06:05:41.522252 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-02-04 06:05:42.463930 | orchestrator | changed: .d..t...... ./ 2026-02-04 06:05:42.465228 | orchestrator | changed: All items complete 2026-02-04 06:05:42.465333 | 2026-02-04 06:05:43.192575 | orchestrator | changed: .d..t...... ./ 2026-02-04 06:05:43.907626 | orchestrator | changed: .d..t...... 
./ 2026-02-04 06:05:43.929392 | 2026-02-04 06:05:43.929513 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-02-04 06:05:43.964895 | orchestrator | skipping: Conditional result was False 2026-02-04 06:05:43.968517 | orchestrator | skipping: Conditional result was False 2026-02-04 06:05:43.986212 | 2026-02-04 06:05:43.986318 | PLAY RECAP 2026-02-04 06:05:43.986393 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-02-04 06:05:43.986433 | 2026-02-04 06:05:44.111286 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-04 06:05:44.113709 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-04 06:05:44.850353 | 2026-02-04 06:05:44.850517 | PLAY [Base post] 2026-02-04 06:05:44.865034 | 2026-02-04 06:05:44.865171 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-02-04 06:05:45.875790 | orchestrator | changed 2026-02-04 06:05:45.886973 | 2026-02-04 06:05:45.887106 | PLAY RECAP 2026-02-04 06:05:45.887185 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-02-04 06:05:45.887262 | 2026-02-04 06:05:46.010219 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-04 06:05:46.011289 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-02-04 06:05:46.807236 | 2026-02-04 06:05:46.807451 | PLAY [Base post-logs] 2026-02-04 06:05:46.820381 | 2026-02-04 06:05:46.820524 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-02-04 06:05:47.261991 | localhost | changed 2026-02-04 06:05:47.277724 | 2026-02-04 06:05:47.277897 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-02-04 06:05:47.316521 | localhost | ok 2026-02-04 06:05:47.323181 | 2026-02-04 06:05:47.323342 | TASK [Set zuul-log-path fact] 2026-02-04 
06:05:47.340223 | localhost | ok 2026-02-04 06:05:47.351492 | 2026-02-04 06:05:47.351618 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-02-04 06:05:47.389305 | localhost | ok 2026-02-04 06:05:47.396019 | 2026-02-04 06:05:47.396208 | TASK [upload-logs : Create log directories] 2026-02-04 06:05:47.895596 | localhost | changed 2026-02-04 06:05:47.900768 | 2026-02-04 06:05:47.900923 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-02-04 06:05:48.373622 | localhost -> localhost | ok: Runtime: 0:00:00.006971 2026-02-04 06:05:48.383120 | 2026-02-04 06:05:48.383317 | TASK [upload-logs : Upload logs to log server] 2026-02-04 06:05:48.943103 | localhost | Output suppressed because no_log was given 2026-02-04 06:05:48.946202 | 2026-02-04 06:05:48.946363 | LOOP [upload-logs : Compress console log and json output] 2026-02-04 06:05:49.001167 | localhost | skipping: Conditional result was False 2026-02-04 06:05:49.006549 | localhost | skipping: Conditional result was False 2026-02-04 06:05:49.019431 | 2026-02-04 06:05:49.019617 | LOOP [upload-logs : Upload compressed console log and json output] 2026-02-04 06:05:49.065301 | localhost | skipping: Conditional result was False 2026-02-04 06:05:49.065900 | 2026-02-04 06:05:49.069417 | localhost | skipping: Conditional result was False 2026-02-04 06:05:49.081783 | 2026-02-04 06:05:49.081968 | LOOP [upload-logs : Upload console log and json output]